
Architecture: 2 n-dimensional layers

Original Sin: Unidimensional Asymmetry

Back in the day, I was delighted when I first found this three-layer architectural pattern:

[Figure: layers]

Finally, some order in the chaos!

… but trying to use something like this in practice was somewhat difficult: there are other aspects that do not fit into any of these layers.

The next interesting thing I came across was the "Layered class type architecture" from Scott W. Ambler's "Building Object Applications That Work": Interface, Process, Domain, Persistence, System.

Now it was clear that, unfortunately, something is wrong with the 3-layer architectural model… and the "original sin" is its unidimensional asymmetry: the UI-to-Persistence "line" is the only dimension, and many other software areas cannot fit on it.

The fix: Symmetry!

One of the first actionable and symmetrical models was the UML (Jacobson) robustness diagram. Any use case can be represented as being realized by objects that are instances of three categories of classes: boundary, control and entity.

  • Boundary, Control, Entity

That is simple, beautiful and symmetric: UI and other I/O (input/output) aspects are managed in a non-discriminatory manner. One problem: too little guidance.
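To make it concrete, here is a minimal sketch of the three stereotypes for a hypothetical "register order" use case (the class names are my own invention for this illustration, not Jacobson's):

    // Entity: pure domain data and rules, no knowledge of I/O.
    class Order {
        private final String id;
        private final double total;
        Order(String id, double total) { this.id = id; this.total = total; }
        boolean isValid() { return total > 0; }
        String id() { return id; }
    }

    // Boundary: everything that talks to the outside world (UI, files, network).
    interface OrderBoundary {
        Order readOrderRequest();
        void showConfirmation(String orderId);
    }

    // Control: the use-case flow, coordinating boundary and entity objects.
    class RegisterOrderControl {
        private final OrderBoundary boundary;
        RegisterOrderControl(OrderBoundary boundary) { this.boundary = boundary; }

        void execute() {
            Order order = boundary.readOrderRequest();
            if (order.isValid()) {
                boundary.showConfirmation(order.id());
            }
        }
    }

Any kind of I/O – a screen, a file, a network listener – gets the same treatment: it is just another boundary.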

Koni Buhrer's "Universal Design Patterns" goes a little further: four design elements and their recommended interactions can describe everything and offer a symmetrical model:

  • Data Entities, I/O Servers, Transformation Servers, Data Flow Managers

It is symmetric, logical and actionable, but it still offers too little guidance.

What more could we need?

Robert C. Martin with Clean Architecture and Alistair Cockburn with Hexagonal Architecture have some beautiful answers and guidance (see Uncle Bob's bibliography for similar works by other authors).

With the Hexagonal Architecture name, Alistair Cockburn wants to make clear the main defect of the classical model – its unidimensional lack of symmetry: "Allow an application to equally be driven by users, programs, automated test or batch scripts, and to be developed and tested in isolation from its eventual run-time devices and databases." This architectural model decouples the use cases (~ the application) from the technology-dependent I/O: "When the application has something to send out, it sends it out through a port to an adapter, which creates the appropriate signals needed by the receiving technology (human or automated)." (quotes from the mentioned A. Cockburn post)
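A minimal sketch of the port-and-adapter idea, with invented names (NotificationPort, the adapters and ReminderService are illustrations only, not Cockburn's examples):

    // Port: a technology-neutral contract owned by the application.
    interface NotificationPort {
        void send(String recipient, String message);
    }

    // Adapter for one receiving technology (standard output here, as a stand-in for SMTP, SMS, etc.).
    class ConsoleNotificationAdapter implements NotificationPort {
        public void send(String recipient, String message) {
            System.out.println("to " + recipient + ": " + message);
        }
    }

    // Adapter used by automated tests: the application is driven and observed without any real device.
    class RecordingNotificationAdapter implements NotificationPort {
        final java.util.List<String> sent = new java.util.ArrayList<>();
        public void send(String recipient, String message) {
            sent.add(recipient + "|" + message);
        }
    }

    // The application (use cases) only ever sees the port, never the technology behind it.
    class ReminderService {
        private final NotificationPort notifications;
        ReminderService(NotificationPort notifications) { this.notifications = notifications; }
        void remind(String user) { notifications.send(user, "Your invoice is due"); }
    }

The same ReminderService can be wired to the console adapter at run time and to the recording adapter in tests – the symmetry Cockburn asks for.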

Robert C. Martin's Clean Architecture takes a step further in strategically separating the concerns and presents these software areas (quotes from Uncle Bob's recommended article); a minimal code sketch follows the list:

  • Entities – representing the functional part that is application-independent, working at a higher level: domain or enterprise
  • Use cases – representing the functional part that is application-dependent, realizing the flow of data using the domain and enterprise business rules
  • Interface Adapters – "set of adapters that convert data from the format most convenient for the use cases and entities, to the format most convenient for some external agency such as the Database or the Web"; in Uncle Bob's view, the MVC and MVP parts go here
  • Frameworks and drivers – "frameworks and tools such as the Database, the Web Framework, etc." Most importantly, according to Uncle Bob: "This layer is where all the details go. The Web is a detail. The database is a detail." Yes, technologies and deployment mode are not the core of your application; they are what will change independently, and you want to decouple them.
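Here is a minimal, hypothetical sketch of how these areas could look in code (Invoice, ApproveInvoiceUseCase and InvoiceGateway are names I made up for the illustration):

    // Entity (enterprise rule): application-independent.
    class Invoice {
        final double amount;
        Invoice(double amount) { this.amount = amount; }
        boolean isOverLimit(double limit) { return amount > limit; }
    }

    // The use case owns this interface; the interface-adapter layer implements it.
    interface InvoiceGateway {
        Invoice findById(String id);
    }

    // Use case (application rule): realizes the flow of data using the entities.
    class ApproveInvoiceUseCase {
        private final InvoiceGateway gateway;
        ApproveInvoiceUseCase(InvoiceGateway gateway) { this.gateway = gateway; }

        boolean approve(String invoiceId, double limit) {
            Invoice invoice = gateway.findById(invoiceId);
            return !invoice.isOverLimit(limit);
        }
    }

    // Interface adapter: converts between the use-case format and an external agency.
    // Here an in-memory map stands in for the Database detail (frameworks-and-drivers layer).
    class InMemoryInvoiceGateway implements InvoiceGateway {
        private final java.util.Map<String, Invoice> rows = new java.util.HashMap<>();
        void save(String id, Invoice invoice) { rows.put(id, invoice); }
        public Invoice findById(String id) { return rows.get(id); }
    }

Note that the dependency points inward: the use case knows only the gateway interface, never the "detail" that implements it.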

I strongly recommend all the symmetrical models described above: Robustness Analysis (Jacobson), Universal Design Patterns (Koni Buhrer), Hexagonal Architecture (Alistair Cockburn), Clean Architecture (Robert C. Martin) and Layered Class Type Architecture (Scott W. Ambler); they are logically equivalent, with more or less detail.

Simplified representation of symmetrical models:

[Figure: one-flow]

Comments

  • Flow control orchestrates the I/O parts (symmetrically) and the business logic, where entities are the exchanged data
  • If Inversion of Control is used and the participants in the flow are represented by interfaces, the design will be more adaptable and testable

What more could we need?

There is some asymmetry!

[Figure: generic]

There is some asymmetry in the logical architectural areas that is somehow skipped by the current models of Hexagonal or Clean Architecture, and this is the reason behind the original mistake of the unidimensional linear model (a code sketch follows this list):

  • There are two kinds of interactions for a system
    • The client-level interaction – the one with the system's actors
    • The resource-level interaction – the one used to access its resources (such as databases, heavy processing)
  • The client-level interaction
    • Contains the links between the system and its clients/actors
    • Has dedicated flow control to manage the (possible) client session state (interaction state)
  • The resource-level interaction
    • Contains the links between the system and its resources
    • Has dedicated flow control to manage the (rather) stateless access to these resources
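A minimal sketch of the two cells, assuming an invented ReportService (stateless resource-level access) and ReportInteractionFlow (client-level flow that keeps the interaction state):

    // Resource-level cell: stateless access to the system resources (database, heavy processing, ...).
    class ReportService {
        String buildReport(String customerId) {
            // would call repositories / run computations; no client session state is kept here
            return "report for " + customerId;
        }
    }

    // Client-level cell: linked to the actor; may keep interaction (session) state between calls.
    class ReportInteractionFlow {
        private final ReportService service;
        private String selectedCustomer;          // interaction state owned by this cell only

        ReportInteractionFlow(ReportService service) { this.service = service; }

        void selectCustomer(String customerId) { this.selectedCustomer = customerId; }

        String showReport() {
            return service.buildReport(selectedCustomer);   // stateless call into the resource cell
        }
    }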

[Figure: asimetric_2]

Comments

  • Each layer is represented here in a simplified form. In fact, it could contain the full set of software areas: entities (business/enterprise), flow control (use case) and I/O adapters.
  • Each layer is symmetrical, but the aggregate is asymmetrical, based on the actor-to-resources vectorization

Scaling aspects (performance and others)

Service Layer

  • can manage the access to resources with the benefits of its stateless model, using various patterns and tools that are based on this aspect.

Interaction Layer

  • can manage, in a decoupled way, the possible scaling issues of the clients' data cache
  • the same applies to managing the external interaction aspects

 

Examples – same services with various forms of interaction

  • Generic Model

[Figure: generic]

  • UI based Module
    • Interaction with the human user client is designed with the MVP pattern and provides access to system resources via a set of three services; the interaction layer also orchestrates the calls to these (resource access) services (a code sketch follows the figure)

[Figure: example-ui]
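A possible shape of this module, sketched with invented names (CheckoutPresenter, CheckoutInteraction and the three service interfaces are illustrations only):

    // Three hypothetical resource-access services, each stateless and with a plain interface.
    interface CustomerService { String loadCustomer(String id); }
    interface PricingService  { double priceFor(String customerId); }
    interface BillingService  { void bill(String customerId, double amount); }

    // Interaction layer: orchestrates the calls to the three services for this client scenario.
    class CheckoutInteraction {
        private final CustomerService customers;
        private final PricingService pricing;
        private final BillingService billing;

        CheckoutInteraction(CustomerService c, PricingService p, BillingService b) {
            this.customers = c; this.pricing = p; this.billing = b;
        }

        String checkout(String customerId) {
            String customer = customers.loadCustomer(customerId);
            double amount = pricing.priceFor(customerId);
            billing.bill(customerId, amount);
            return customer + " billed " + amount;
        }
    }

    // MVP: the presenter translates view events into calls on the interaction flow.
    interface CheckoutView { void showResult(String text); }

    class CheckoutPresenter {
        private final CheckoutInteraction interaction;
        private final CheckoutView view;
        CheckoutPresenter(CheckoutInteraction interaction, CheckoutView view) {
            this.interaction = interaction; this.view = view;
        }
        void onCheckoutClicked(String customerId) {
            view.showResult(interaction.checkout(customerId));
        }
    }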

  • “Flat” Flow Background Module
    • The client/actor is an internal timer. When a single (resource) access service is invoked, the interaction layer flow can be "flat" (it just redirects the call)

[Figure: example-back-simple]

  • Complex Flow Background Module
    • A similar case, but where the background run is similar to one or more actions from the UI-based module: the interaction flow will also keep the state of the client session between successive service calls

[Figure: example-back-complex]

Notes about Microservices

Microservices can be a valid architectural choice, but my problem is all the talk about "monolithic application" versus "microservices". I recommend first reading Robert C. Martin's articles on the subject.

In the model proposed above, the "service":

  • Has a clear responsibility – managing the access to the system resources
  • Has a rather "plain" interface – the service itself is decoupled from its deployment mode
  • Is decoupled from the way actors interact with the system to access it and from its deployment mode; services are reusable across multiple types of interaction and deployment

How do microservices fit with this model:

  • The service is in the same micro-monolith (sic!) as its deployment mode
  • The client interaction flow is out of scope, but it is forced to access a fixed deployment mode (with its associated technology)

Update (May 15, 2017)

I just found out from an Alistair Cockburn tweet (see below) that he has updated his "Hexagonal Architecture" article with a similar "asymmetry" concept:

<<New note on at . Primary/ secondary actor asymmetry (end of the article) Take look if you like>>

A few comments about the differences between the two similar viewpoints:

  • I still believe that the main split is between the cell related to the "client interaction flow" and the cell related to the "resource access service(s) flow"
  • The case where we have primary actors and secondary actors is just a particular case, where the resource access flow can be assimilated to an internal use case and/or to a distinct component
  • In the generic case, the "client interaction flow" may be in the same use case and in the same component as the "resource access service(s) flow"
  • The 2-cell architecture is never about "UI and back-end services decisions" (see the Cockburn post section "Separating Development of UI and Application Logic"): we have flow control, business and technology at the same time both in the "front-end" interaction cell and in the "back-end" resource access cell
  • The UI is not the only way of interacting with system clients: we can have, for example, external request/response interactions, scheduling/timers, listeners, etc.
  • The 2-cell architecture is not only about separating adapters and ports; it is also about separating the flows into these two categories: the client interaction flow and the resource access orchestration flow. The whole application is split into two cells, not only the ports and adapters.

 

Bibliography

Layered Class Type Architecture – by Scott W. Ambler

Universal Design Pattern – by Koni Buhrer

Hexagonal Architecture – by Alistair Cockburn

Clean Architecture – by Robert C. Martin

Microservices – by Martin Fowler

Refactoring: strategy & collaborative work

Introduction – The extended definition of refactoring also contains this part: "Its heart is a series of small behavior preserving transformations. Each transformation (called a "refactoring") does little, but a sequence of transformations can produce a significant restructuring. Since each refactoring is small, it's less likely to go wrong. The system is kept fully working after each small refactoring, reducing the chances that a system can get seriously broken during the restructuring." – Martin Fowler, at refactoring.com.

Note: this imaginary dialogue is inspired by intensive and extensive practice.

What if we have to perform a Big Refactoring?

Refactoring cannot be big, it is a special kind of redesign, performed in small steps. See the above definition again.

Let me reformulate: I need to do a lot of refactorings to clean up legacy code. Is there any best practice?

You need a strategy?

Yes!

Well, agile and software engineering offer a lot of tactics, including Martin Fowler's refactoring catalogs, but almost no strategy, except… you should work clean from the start. Use Martin Fowler's guidance from the start (or early enough), and also Uncle Bob's Clean Code and Clean Architecture.

Hey! I already have a lot of legacy code debt to solve!

Ok! Let’s build a strategy: how do we start?

This was my question!

Refactoring means improving the design while preserving the functionality. Tests included. Do you have good requirements specifications or a good set of automated tests?

Not in this case.

Then you should recover the functionality knowledge from the code and incrementally put some tests in place. Better: functionality should be explicitly represented in the code and should be testable. And remember: there are two kinds of functionality…

Two?

Yes: first, the one that is application-independent and represents the target domain (domain business rules), and then the one that is application-dependent, aka the flow of control.

I remember: that sounds like Uncle Bob's Clean Architecture.

Yes. You will need to be able to apply distinct tests to them, without mixing them with other concerns such as UI, persistence, network and others. Anyway, where do I usually start? I try to make the running scenarios very clear in the code, and that means the flow of control.

In English, please?

I want to clearly see this: where the events triggered by the system actors start, and the end-to-end full path until they return. More than that, I want to refactor to make this path clear enough.

How could it not be clear?

Global context. If the functionality path chaotically accesses the global context, then we can have undesired intersections with other paths/scenarios that will compromise both data and function. At the same time, we can decouple the flow/orchestration from the specialized concerns.

What do we get?

We will have an explicit representation of the functionality (with no undesired contacts with other flows), needed for testing (we can apply automated tests to it). We will also have the first entry points to the specialized parts, which can also be "decorated" with some tests. Then we can apply tactical refactorings as we need them.
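(A minimal before/after illustration of this answer, with invented names – GlobalContext, LegacyDiscount and DiscountFlow are not taken from any real project:)

    // Before (typical legacy shape): the scenario reaches into a global context,
    // so its behavior depends on whatever other flows have left there.
    class GlobalContext {
        static String currentCustomerId;          // shared and mutated from many places
    }

    class LegacyDiscount {
        double discount() {
            // hidden input: cannot be tested without first arranging the global state
            return GlobalContext.currentCustomerId.startsWith("VIP") ? 0.2 : 0.0;
        }
    }

    // After: the flow is an explicit method whose inputs and collaborators are visible,
    // so it can be covered by automated tests and its entry points are obvious.
    class DiscountFlow {
        double discountFor(String customerId) {
            return customerId.startsWith("VIP") ? 0.2 : 0.0;
        }
    }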

And …the domain business rules?

They must be decoupled from other concerns, and you have to dedicate specialized design/test elements to them.

That’s all?

Almost. You need to test any redesign. Tests need knowledge about the functionality. If some parts are missing, now is the time to recover them in automated tests (preferably) or in another form of specification.

How do I know that recovered requirements are correct?

You don't. More than that, you should always suspect that spaghetti-like legacy code includes many unobserved bugs. You should validate these functional requirements through intensive collaboration with your colleagues, domain experts, customers and other stakeholders.

Do you have any idea about how to do that?    

Start with Pair Programming (refactor in pairs). Pairing is not enough, and you will probably need more people involved – use Model Storming: discuss the resulting functionality with more colleagues.

Model Storming?

Yes, it is an agile practice, part of Agile Modeling (and Disciplined Agile), created to complement the core practices from XP. Also, you should actively involve your stakeholders in validating the recovered functionality… Active Stakeholder Participation, which is another recommended Agile Modeling practice. And at the end you will have some free bonuses.

What bonuses?

The functionality is easy to read accurately from the code (seconds!), and your colleagues and stakeholders will already have acquired the recovered functional knowledge.

Summary – Refactoring significant spaghetti legacy code needs tests/testing. Usually, the knowledge about functionality necessary for testing is insufficient, so it must be recovered from the code. An effective and proven way to do that is to apply Clean Architecture principles: decouple both the domain rules and the application-specific flow of control (aka use cases). Anyway, legacy code with too much technical debt will contain a lot of bugs, so the recovered functionality is inaccurate and needs to be validated. The knowledge and expertise needed for validation are distributed among team members, domain experts, customers and other stakeholders, so you need to work in a collaborative manner with all the mentioned parties. There are some outstanding software engineering and agile practices that can help with this aspect: Pair Programming, Model Storming and Active Stakeholder Participation (see the dialogue above).

Note: "need" and "necessary" are used often in the text above, just because we have followed the logical path of things necessary for testing a redesigned legacy code.

Remember: a lot of technical debt ~ inaccurate functionality. To refactor and test, you must restart the process and the collaborative work from functional requirements acquisition.

Limits of inspected-in quality

 “All necessary test cases, starting from requirements”

Quality has two major “sources”: built-in quality (build without defects) and inspected-in quality (test, find defects and fix them).

Poor built-in quality is – statistically speaking – a known and common problem in software development. In many cases, as the product grows, quality will decrease over time through the accumulation of technical debt. There is a widespread belief that we can improve the quality (even in difficult cases) based mostly on a test-and-fix approach. We will try to prove that this is "mathematically" wrong and rather wishful thinking.

(Traditional voice) We will test & fix, right? And we will do that in the most professional way. We will document the requirements very well. We will generate all the necessary test cases starting from the requirements. We will use them to execute very professional tests, and we will make all the significant fixes needed for good quality.

Yes, indeed, a professional way to generate the test cases will start from the requirements, will identify functional flows, will generate running scenarios and then the test cases considering input data, test conditions and expected results.

(Traditional) That should be great! Even TDD implements the test cases considering all these elements.

That is true, but there is a difference: TDD offers something more. Anyway, you forgot something…

How to blow up traditional testing

Let's consider a pretty complex set of functionality. The orthodox approach to testing will do this:

  • (We have good analysts and good testers)
  • Write good enough requirements
  • Extract scenario paths from functional flows and state transitions
  • Generate test cases from the scenarios, also considering inputs, conditions and expected results
  • Run the tests, find defects and fix them

That could take longer, but finally we will reach the quality goals, right?

NO!

Please consider this extra scenario: what if we have too much technical debt? Let's see some consequences (the numbers used are just examples).

Scenarios – If the requirements logic generates 100 scenarios, poorly designed code could physically have 600 or more. How is that possible? It is pretty easy, for example by accessing an undesired global context. If my scenario accesses a global context (even only one global variable…), that will multiply the number of real test cases, because this data is changed in unexpected ways (not specified in the requirements) by other flows. In fact, the global context mixes and multiplexes pieces of functionality that are not logically related. Duplicated code is also a mighty enemy that creates and multiplies test cases ("phantoms", beyond the ones resulting from the requirements). If we want to change a formula that was hard-coded at each usage, how could a tester really know where it was hard-coded, and how many times?
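A minimal illustration of the duplicated-formula part, with invented names (InvoicePrinter, OrderSummary and VatPolicy are hypothetical):

    // The same VAT formula hard-coded in two places: a change of the rate now has two
    // physical scenarios, and a tester has no reliable way to know where the second one hides.
    class InvoicePrinter {
        double totalWithVat(double net) { return net * 1.19; }
    }
    class OrderSummary {
        double totalWithVat(double net) { return net * 1.19; }   // silently diverges after the next change
    }

    // A single, protected rule: one physical place to change and to test.
    class VatPolicy {
        static final double RATE = 0.19;
        static double totalWithVat(double net) { return net * (1 + RATE); }
    }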

States – If the requirements logic supposes 50 states of the main entities and main logic, poorly designed code could physically have 500. How is that possible? Again, it is easy. For example, if the state transitions for an entity are not encapsulated, the code can be fragile enough to induce supplementary phantom states because of poorly written state transitions (one way is to use only basic setters to realize these transitions).
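A minimal illustration, with invented names (OrderRecord and ShippableOrder are hypothetical), of how a basic setter allows phantom states that an encapsulated transition would forbid:

    // Basic setter: any caller can put the entity into a combination the requirements never
    // defined (e.g. "SHIPPED" while it was never paid), creating phantom states to test.
    class OrderRecord {
        String status;                            // "NEW", "PAID", "SHIPPED", ...
        void setStatus(String status) { this.status = status; }
    }

    // Encapsulated transitions: only the specified state changes are physically possible.
    class ShippableOrder {
        private String status = "NEW";

        void pay() {
            if (!status.equals("NEW")) throw new IllegalStateException("already " + status);
            status = "PAID";
        }
        void ship() {
            if (!status.equals("PAID")) throw new IllegalStateException("cannot ship from " + status);
            status = "SHIPPED";
        }
        String status() { return status; }
    }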

Expected results – poorly designed code will also damage the result display, logging or persistence. The easiest way is to damage the displayed data: "catch" the errors and not display them, display a valid value for a null, etc.

Let's make a summary:

  • We have 400 official test cases for 100 scenarios and 50 state transitions, and these cover the requirements
  • Physically, the poorly designed system has more than 600 scenarios and 500 state transitions, which would require more than 2000 test cases

We start testing – running 400 test cases for a system that needs 2000:

  • We will fix the defects that belong to the 400 official test cases
  • The testers will make some supplementary explorations, and we will catch some of the defects from phantom test cases and phantom state transitions
  • A lot of defects will remain unobserved, mostly from those “undocumented” test cases

(Traditional) Wait!! Won't the team – developers, testers and analysts – discover the problem? We will test more and fix more defects!

Based on experience, that will almost never happen! There is almost zero chance of generating all the "undocumented" test cases by pure exploration alone: the tester has no clue where most of the hidden, phantom test cases are.

(Traditional) There should be a way to solve that!

There is one: when a tester discovers some phantom test cases, or when the expected results are damaged, that tester must report this quality problem: "we have test-damaging defects, please analyze them and fix the root cause".

Test and fix cannot protect you from test-damaging defects

(Traditional) That sounds good enough!

It is not! It is too late to discover during testing that we have such poor code, and such poor built-in quality, that it will affect the tests themselves and will cost too much.

Some conclusions

There is nothing wrong with generating the test cases from the requirements in an orthodox manner. It is just that you must also physically implement these requirements, in order to make the generated test cases effective. The developer must not have the "liberty" to implement phantom scenarios and phantom states, or to mix data and flows in ways that were not specified by the requirements.

We need to physically implement the requirements

It is ineffective, inefficient, unethical and unprofessional to build a system that cannot be tested – where the real system test cases far outnumber the requirements-generated test cases (and are practically impossible to generate at testing time).

The traditional way of trying to get quality mostly by test-and-fix is in many cases a cognitive bias (inadequate logic) that brings a spiral of undesired results. Martin Fowler's Technical Debt Quadrant logic is applicable in this case.

What could be done?

We need to reduce as much as possible the impedance between the requirements specification and the physical implementation. Some examples:

  • Physically "protect" the business: separate the business aspects and do not duplicate them
  • Apply basic design principles: do not duplicate, do not access the global context
  • Physically "protect" the functional flows (separate them: do not mix them with other aspects)
  • Physically "protect" the logic of timing sequences (see Uncle Bob's "Clean Coders" videos)
  • Physically "protect" the logical state transitions
  • Prevent and fix test-damaging defects

What about TDD?

TDD is on the list of things that help us physically implement the requirements. The ultimate requirements are the test cases, and TDD physically implements the test cases. The only major limit of TDD is that it will almost never implement all the test cases.

The ultimate requirements are the test cases

… but we almost never implement all the test cases as automated tests.

Anyway, from industry experience, the examples in the list above are also needed to enable TDD.

Software architecture – searching for a better definition

 Software architecture – land of confusion

Architecture is still a big elephant in the room for software engineering in general and agile in particular. We do not want to make additional mistakes because of an inappropriate definition or understanding.

Possible problems:

  • Focus on "paper architecture" instead of architecture proven by working software
  • Focus on structural aspects while disregarding others, because of a lack of understanding of the behavioral dimension or of the consistency needed from an architectural style
  • Poor real architecture, because the goals of the architecture are not well understood

Below you can find some comments about definitions from significant sources, similarities with the construction domain, and Uncle Bob's notes about the "fundamental architecture".

Wikipedia definition

 “Software architecture refers to the high level structures of a software system, the discipline of creating such structures, and the documentation of these structures. It is the set of structures needed to reason about the software system. Each structure comprises software elements, relations among them, and properties of both elements and relations. The architecture of a software system is a metaphor, analogous to the architecture of a building.” (https://en.wikipedia.org/wiki/Software_architecture)

 According to this source, architecture is the:

  • Structure of the system ….
  • Process of creating …
  • The documentation …
  • A metaphor …

Strange, I could bet that is none of these things! Here are some comments.

Structure… A software solution (a "system") is represented by both structural and behavioral dimensions. Of course, if there are sub-systems and components encapsulating both structure and behavior, we could metaphorically (!) say that these subsystems/components are the "structure".

Process… it is not – it is just a language convention to use the term "architecture" also for the "process of creating the architecture".

Documentation… it is not. Only if the documentation were 1:1 with the real architecture could we use the term "architecture" for the documentation. For most software systems, there are significant differences between the paper architecture and the real architecture. A better term is "architectural description".

A metaphor… we can use metaphors, but if we implement them, we will build a poem, not a software system.

MSDN definition

 “Software application architecture is the process of defining a structured solution that meets all of the technical and operational requirements, while optimizing common quality attributes such as performance, security, and manageability. It involves a series of decisions based on a wide range of factors, and each of these decisions can have considerable impact on the quality, performance, maintainability, and overall success of the application.” (https://msdn.microsoft.com/en-us/library/ee658098.aspx)

Process … see above comments.

Quality attributes… another metaphor. Usability, performance and others are non-functional requirements, and quality is the degree of adherence to any kind of requirement. The architecture must fulfill all types of requirements, including the non-functional ones.

IEEE Standard 1471-2000 definition

"3.5 architecture: The fundamental organization of a system embodied in its components, their relationships to each other, and to the environment, and the principles guiding its design and evolution."

Components… This definition could work for a component-based architecture, where both structure and behavior are intuitively represented using components. In many cases, though, various structures and behaviors are mixed, "melted" into a single component, and then this definition is unusable.

SEI definition

 “The software architecture of a program or computing system is a depiction of the system that aids in the understanding of how the system will behave.

Software architecture serves as the blueprint for both the system and the project developing it, defining the work assignments that must be carried out by design and implementation teams. The architecture is the primary carrier of system qualities such as performance, modifiability, and security, none of which can be achieved without a unifying architectural vision. Architecture is an artifact for early analysis to make sure that a design approach will yield an acceptable system. By building effective architecture, you can identify design risks and mitigate them early in the development process.” (http://www.sei.cmu.edu/architecture/)

 A description, a blueprint… See “Documentation”

Defining the work … see “Process”

Artifact … see “Documentation”

A better definition

Philippe Kruchten, Grady Booch, Kurt Bittner, and Rich Reitman (at Rational, starting from the work of Mary Shaw and David Garlan) use this definition:

 “Software architecture encompasses the set of significant decisions about the organization of a software system including the selection of the structural elements and their interfaces by which the system is composed; behavior as specified in collaboration among those elements; composition of these structural and behavioral elements into larger subsystems; and an architectural style that guides this organization. Software architecture also involves functionality, usability, resilience, performance, reuse, comprehensibility, economic and technology constraints, tradeoffs and aesthetic concerns.”

(See https://pkruchten.files.wordpress.com/2009/07/kruchten-090608-agile-architecture-usc.pdf)

This seems to be a much better definition. Architecture encompasses the set of significant decisions about the software system organization, including more aspects: structural elements, behavior, composition of these structural and behavioral elements and an architectural style.

 Summary & Conclusions

Definition – Architecture is about the most important decisions related to the software system (the software solution), covering various aspects (see Kruchten's definition above)

Requirements and quality – Architecture should address all kinds of requirements: functional, non-functional and constraints. The architecture contributes to the system quality, which is the adherence to all these requirements. A modern approach to better quality is to have "built-in quality", and architecture has a major role here.

Process – There is a process of realizing the architecture – "architecting" – that can differ across software processes and design approaches

Documentation – We can document the software solution to a lesser or greater extent. Representing the most significant decisions related to the software solution, architectural aspects are the first candidates for such documentation. A good approach is to also have all architectural intents and decisions directly visible in the resulting product.

Similarities with construction domain (?)

In construction, the architecture is usually split from the engineering aspects: structural engineering, mechanical, electrical. In the software domain, all aspects of the solution are part of the architecture. More: the solution in the construction domain is less evolutionary, and the documented architecture and documented engineering (the artifacts) are less likely to differ from the final result.

Uncle Bob: fundamental architecture and details

For Robert C. Martin, the architecture seems to be very similar to the one from the construction domain, but in fact I think he uses a metaphor to point out what matters most when creating a software architecture.

 “Is the web an architecture? Does the fact that your system is delivered on the web dictate the architecture of your system? Of course not! The Web is a delivery mechanism, and your application architecture should treat it as such. The fact that your application is delivered over the web is a detail and should not dominate your system structure. Indeed, the fact that your application is delivered over the web is something you should defer. Your system architecture should be as ignorant as possible about how it is to be delivered. You should be able to deliver it as a console app, or a web app, or a thick client app, or even a web service app, without undue complication or change to the fundamental architecture.” (See http://blog.8thlight.com/uncle-bob/2011/09/30/Screaming-Architecture.html)

This is my translation: Martin tries to correct those interpretations where the Web – a delivery mechanism, as he mentioned – is considered an "architecture". And it is not! It is a detail (an architectural detail). This "detail" should not affect the rest of the architectural decisions, which can be far more important. For this reason, details should be decoupled ("as ignorant as possible") from the "fundamental architecture".

What is this “fundamental architecture” ?

We can consider the part that realizes the functional requirements as fundamental; it should not be mixed with frameworks, delivery mechanisms and other "details". That is the "form follows function" principle applied in software engineering. If we search carefully, we can discover that many recommendations for good software design contain such an approach.
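As a minimal, hypothetical sketch of this idea (GreetVisitorUseCase and the two delivery classes are invented for the illustration): the same fundamental part can be delivered as a console app or behind a web controller, without any change to it.

    // The fundamental part: a use case that knows nothing about how it is delivered.
    class GreetVisitorUseCase {
        String greet(String name) { return "Hello, " + name; }
    }

    // Delivery detail 1: a console app.
    class ConsoleApp {
        public static void main(String[] args) {
            String name = args.length > 0 ? args[0] : "world";
            System.out.println(new GreetVisitorUseCase().greet(name));
        }
    }

    // Delivery detail 2: a web endpoint. Any HTTP framework would appear only here,
    // never inside the use case, so the delivery mechanism stays a replaceable detail.
    class GreetController {
        private final GreetVisitorUseCase useCase = new GreetVisitorUseCase();
        String handleRequest(String nameParameter) {
            return useCase.greet(nameParameter);
        }
    }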
