Monthly Archives: October 2014

Agile design events and decision options

What concerns should be considered when taking design decisions in an Agile context?

Let’s think first about the “design events”, the moments in the life-cycle when design decisions are taken. The best description of these events can be found in the Agile Modeling method (and is inherited by DAD – Disciplined Agile Delivery):

  • Architecture Envisioning
  • Iteration Modeling
  • Look Ahead Modeling – opportunistically “model a bit ahead”
  • Model Storming – just in time modeling
  • … and Coding

TDD and Refactoring are rather ongoing activities, but the main design decisions related to them can also be associated with any of the above-mentioned events.

All the considered design concerns are generic, but some of them are more important in an Agile (read: Adaptive) context. This is the proposed list:

  • New versus change: What existing parts will be affected and what is new? Sub-systems, components, external interactions.
  • Clean Architecture: What changes in the business representations, and what changes in the boundaries with frameworks and drivers?
  • Separation of Concerns: what do we need to separate/decouple for Clean Architecture and for TDD?
  • Technical Debt: What Technical Debt do we have to manage in the affected existing parts?
  • Cleanup: How much cleanup do we need? How much Technical Debt should be paid now?
  • Adapt: How much do the affected parts need to adapt (their design) to the currently known requirements context?
  • Adapt design techniques: which techniques should be used, and to what extent? Refactoring or redesign?
  • TDD: the main needed tests and decoupling decisions

Putting all together, we can model an Agile Design Decision Matrix with events and options.

Depending on the moment in the life-cycle, the magnitude of these aspects will be bigger or smaller, but I do not think that any of them should be skipped.
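Such a matrix of events and options could be sketched as a simple data structure. This is only an illustration: the event and concern names come from the lists above, while the “big”/“small” magnitudes are hypothetical placeholders.

```python
# Sketch of an Agile Design Decision Matrix: design events (rows)
# versus design concerns (columns).
EVENTS = ["Architecture Envisioning", "Iteration Modeling",
          "Look Ahead Modeling", "Model Storming", "Coding"]
CONCERNS = ["New vs change", "Clean Architecture", "Separation of Concerns",
            "Technical Debt", "Cleanup", "Adapt", "Adapt techniques", "TDD"]

# Every event considers every concern; only the magnitude differs.
# Start every cell at "small", then raise a few illustrative cells.
matrix = {event: {concern: "small" for concern in CONCERNS}
          for event in EVENTS}
matrix["Architecture Envisioning"]["Clean Architecture"] = "big"
matrix["Model Storming"]["Adapt"] = "big"
matrix["Coding"]["TDD"] = "big"

# No concern is ever skipped: each cell has some magnitude.
assert all(matrix[e][c] in ("small", "big")
           for e in EVENTS for c in CONCERNS)
```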

To adapt or not to adapt? Is cleanup needed or not? Do we need auto-tests or not?
All are possible options, and any answer is a design decision.

Important: The answers to these questions will shape the needed work and must be used to estimate time and cost. Each of these events is a moment of planning and re-planning.

Agile Design:  Simple Clean (& Tested) Design + Adapt

Upgrading Refactoring: Clean vs Adapt, Clean Code/Refactor/Re-Design

If I had to pick one thing that sets XP apart from other approaches, it would be refactoring, the ongoing redesign of software to improve its responsiveness to change.  – Jim Highsmith

The easiest technical debt to address is that which you didn’t incur to begin with. – Scott W. Ambler

 Refactoring definition – What, How and Why

There is a great body of work related to Refactoring, starting with the book “Refactoring: Improving the Design of Existing Code” by Martin Fowler and his collaborators (Kent Beck, John Brant, William Opdyke, Don Roberts) and continuing with the XP – Extreme Programming ecosystem of practices, which includes Refactoring as a fundamental Agile practice.

The definition from Martin Fowler, at www.refactoring.com:

<<In the Refactoring Book, I made the following definition of “Refactoring”

 noun: a change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior

 verb: to restructure software by applying a series of refactorings without changing its observable behavior

 Refactoring isn’t another word for cleaning up code – it specifically defines one technique for improving the health of a code-base. I use “restructuring” as a more general term for reorganizing code that may incorporate other techniques.>>

Some comments:

  • The objective can be generalized: improving the design of existing code, without changing the observable functionality, must be performed to improve the health and economics of the software, where “easier to understand and cheaper to modify” are only some of the objectives (others: fewer defects, easier defect detection)
  • Clean is just one “technique”… What are the others?
  • The extended definition also contains this part: “Its heart is a series of small behavior preserving transformations. Each transformation (called a “refactoring”) does little, but a sequence of transformations can produce a significant restructuring. Since each refactoring is small, it’s less likely to go wrong. The system is kept fully working after each small refactoring, reducing the chances that a system can get seriously broken during the restructuring.” – Martin Fowler, at refactoring.com.

Improving the design of existing code, without changing the observable functionality, must be performed for improving the health and economics of the software.
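A hypothetical sketch of two such small, behavior-preserving transformations (a rename plus an extracted constant); the names and the 1.19 multiplier are invented for illustration only.

```python
# Before: unclear names and a magic number.
def calc(p, q):
    return p * q * 1.19  # what is 1.19?

# After two small refactorings (rename variables, extract constant),
# the intent is explicit while the behavior is unchanged.
VAT_RATE = 1.19  # illustrative tax multiplier

def gross_price(net_price, quantity):
    return net_price * quantity * VAT_RATE

# Observable behavior is preserved at every step:
assert calc(10, 2) == gross_price(10, 2)
```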


Refactoring versus waste – Clean versus Adapt

Refactoring offers a process solution for design flexibility. A possible approach for getting a flexible solution could require some up-front design based on imagined future requirements. In many cases, these requirements never become real, and that is waste. Martin Fowler says: “Refactoring can lead to simpler designs without sacrificing flexibility. This makes the design process easier and less stressful. […] Once you have a broad sense of things that refactor easily, you don’t even think of the flexible solutions. You have the confidence to refactor if the time comes.”

In this case, the better design will come later, when the new context is revealed.

Let’s think about some other kinds of refactoring, the ones that fix the majority of design smells: function and class sizes, multiple responsibilities, magic numbers, duplicated code, poor names, comments and others. In this case, when these rules are applied later, the waste is bigger, and can even grow exponentially.
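A small, hypothetical illustration of such context-independent smell fixes (magic numbers, duplication, poor names); the pay-calculation domain and all values are invented for the example.

```python
# Smell: magic numbers (15, 40, 1.5) and duplicated knowledge.
def weekly_pay(hours):
    if hours > 40:
        return 40 * 15 + (hours - 40) * 15 * 1.5
    return hours * 15

# Context-independent cleanup: name the constants, remove duplication.
# These fixes are valid regardless of future requirements.
HOURLY_RATE = 15        # illustrative values
REGULAR_HOURS = 40
OVERTIME_FACTOR = 1.5

def weekly_pay_clean(hours):
    regular = min(hours, REGULAR_HOURS) * HOURLY_RATE
    overtime = max(hours - REGULAR_HOURS, 0) * HOURLY_RATE * OVERTIME_FACTOR
    return regular + overtime

# Behavior is unchanged, only readability improved:
assert weekly_pay(45) == weekly_pay_clean(45)
```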

That seems contradictory: sometimes it is better to refactor later and sometimes it is better to refactor sooner. Well, it is not! “Flexibility” and “design smells” are two different problems.

There are two kinds of design decisions involved here:

  • Context independent – the ones that make the code clean
  • Context dependent – the ones that are adapted to the current context

Clean and Adapt are two different things and each one must be managed accordingly.

The sequence of decisions related to the design could be like this:

  • First development: simple and clean first code – apply Simple Design and Clean Code. Use some “in development” refactoring, but try to write clean code the first time.
  • Next development: adapt the first simple and clean design to the new context, and also perform the necessary cleaning.

The statement “You build the simplest thing that can possibly work.” should be reformulated as “You build the simplest clean thing that can possibly work”.

Sources of waste: any wild guess about future contexts, but also any mess remaining from previous development.


Reformulating the process

We have two kinds of activities, Clean and Adapt, and several practices that can realize their objectives:

  • Clean Code – write the code clean the first time
  • Refactor – improve the design while preserving observable behavior; executed in small transformations, while keeping the system stable
  • Redesign – similar to refactoring, but not necessarily in small steps (with increased risks)


Clean Code & Refactoring

Both practices introduce a set of design rules, specified in two books: “Clean Code” by Robert C. Martin and “Refactoring: Improving the Design of Existing Code” by Martin Fowler and his collaborators (Kent Beck, John Brant, William Opdyke, Don Roberts). Logically it is a single set of rules that:

  • Keeps the code clean
  • Is rather context independent


Keeping the code clean – practices and approach

In most cases, the design decisions related to clean code are context (requirements) independent. In order to eliminate waste, the code can be kept clean using these practices:

  • Clean Code: write the code as clean as possible the first time
    • Refactor “in development”: the earliest refactoring, during development, is the cheapest
  • Refactor “legacy”: once the debt already exists, paying the debt of dirty code as early as possible is the cheapest variant
  • Re-design “legacy”: only in cases where refactoring (small steps) is not possible, because the risks are increased in this case and the testing is more difficult.

Simple Design is considered the default complementary practice for all these cases.
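When touching “legacy” code, a common supporting move (not named above, but in the spirit of keeping the system stable during small steps) is to first pin the current observable behavior with tests, then refactor. A hypothetical sketch, with invented names and values:

```python
# Dirty but working "legacy" code: magic numbers, terse names.
def legacy_discount(total):
    if total > 100:
        return total * 0.9
    return total

# Characterization tests: record what the code actually does today,
# before any refactoring or re-design step. They make small-step
# refactoring safe and expose when a re-design changes behavior.
assert legacy_discount(50) == 50          # below threshold: unchanged
assert legacy_discount(200) == 180.0      # above threshold: 10% off
```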

Sources of waste:

  • Writing dirty code in the first place is the most massive source of Technical Debt, and refactoring it is not efficient/effective
  • Technical Debt reduces productivity, quality and agility, and increases system rigidity and fragility
  • The cost of paying the Technical Debt can increase exponentially when it accumulates in high amounts of dirty code


Adapt Design rules – practices and approach

These kinds of decisions – adapting the design – are context dependent. In order to eliminate waste, the best approach is to defer such decisions until the context is revealed. A possible approach could be:

  • Respect the above clean code rules
  • Simple (Design) and Clean (Code) – “build the simplest clean thing that can possibly work” for the current context/requirements
  • Adapt “legacy” design to the new context – adapt the previous simple design to the new context/requirements when this new context is revealed (Simple Design is applied again):
    • By refactoring
    • By re-design (see the comments about re-design under the Clean Code rules)

Important: Refactoring/re-designing to adapt the previous design must be done when that previous design does not exactly match the new context. There are some practices for Adaptive Design that increase the chance of an emergent design with less adaptation work – see “Roadmap to an Agile Design”.

Simple Design Formula: Simple Design (context 1) + Adapt & Simple Design (context 2) + …
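The formula above could be illustrated with a deliberately tiny, hypothetical example (the export domain and all names are invented): the simplest clean thing for context 1, then an adaptation plus simple design again when context 2 is revealed.

```python
import json

# Context 1: the only known requirement is CSV export.
# The simplest clean thing that can possibly work:
def export_csv(rows):
    return "\n".join(",".join(str(v) for v in row) for row in rows)

# Context 2 (revealed later): JSON export is also needed.
# Adapt the previous simple design - no speculative plugin framework
# was built up front; the design grows only when the context demands it.
def export(rows, fmt="csv"):
    if fmt == "csv":
        return export_csv(rows)
    if fmt == "json":
        return json.dumps([list(row) for row in rows])
    raise ValueError(f"unknown format: {fmt}")
```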

Sources of waste:

  • Premature design decisions based on wild guesses about future requirements, as opposed to Simple Design
  • Skipping the adapt-design decisions needed in the new context
  • Trying to evolve the design only with these tactical adaptation steps, without any strategic approach for emergent design such as Functional Cohesion and Separation of Concerns from Clean Architecture

Effective/Efficient “Clean”: clean and simple design first and then refactor.

Effective/Efficient “Adapt”: clean and simple architecture (& design) first and then adapt.


Process level upgrades – DAD style

The above logic can be re-formulated, refactored (sic!), in terms of:

  • Process goals: keep the code clean, adapt the design (the previous simple design in the new context)
  • Goal options: an alternative to Refactoring is Re-design, where Refactoring is the default option and Re-design (with increased risks) should be considered only when Refactoring is not possible
  • Note: strategic options are complementary to tactical options, not alternatives

This process-level logic is used by DAD – Disciplined Agile Delivery; it helps fill the process gaps in other methods or in a custom process, and offers much better support for process guidance.

For the Adapt goal, Refactoring is the core tactical technique, but in fact there are many practices that contribute to this goal:

  • Examples of tactical practices for the Adapt goal: Refactoring, Test First, Re-design, Model Storming, Look Ahead Modeling.
  • Examples of strategic practices for the Adapt goal: Clean Architecture, Continuous Integration.

More: a generic goal is Design Envisioning & Change, where the two main aspects are what is new and what is adapted. For an Agile context, based on Simple Design, the Adapt aspect is fundamental and has a “big share” in the process.

There is not only Simple Design; there are “twins”: Simple Design + Adapt

Reactive & Adaptive Products: built-in feedback

Reactive Manifesto

The Reactive Manifesto, initiated by Jonas Bonér, wants to respond to the new context of “multicore and cloud computing architectures […] with tighter SLAs in terms of lower latency, higher throughput, availability and close to linear scalability”.

See more at: https://typesafe.com/blog/why_do_we_need_a_reactive_manifesto%3F#sthash.gP9WznCO.dpuf

In such a case, some default (software) system requirements would be these (quoted from the above-mentioned source):

  • react to events: the event-driven nature enables the following qualities
  • react to load: focus on scalability rather than single-user performance
  • react to failure: build resilient systems with the ability to recover at all levels
  • react to users: combine the above traits for an interactive user experience


 

Requirements to be reactive

Well, that sounds very interesting, but when it is transformed into the Manifesto (http://www.reactivemanifesto.org/), the result is a combination of requirements and solutions, because “message-driven” (async messages) is a solution, for example. My personal taste is to keep the requirements first, as they seem more generic:

  • react to events, react to load, react to failure, react to users
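The first three of these requirements could be sketched in a deliberately toy example (all names and the bounded-queue choice are my own illustration, not from the Manifesto): events arrive on a queue, the queue is bounded so overload becomes observable rather than hidden, and failures are caught and reported instead of crashing the loop.

```python
import queue

events = queue.Queue(maxsize=100)  # bounded: load becomes observable

def handle(event):
    if event == "boom":
        raise RuntimeError("simulated failure")
    return f"handled {event}"

def drain():
    results = []
    while not events.empty():
        ev = events.get()
        try:
            results.append(handle(ev))               # react to events
        except RuntimeError as err:
            results.append(f"recovered: {err}")      # react to failure
    return results

events.put("click")
events.put("boom")
print(drain())  # → ['handled click', 'recovered: simulated failure']
```

Reacting to users would then be the combination of these traits: the system stays responsive for interaction even under load and partial failure.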

 

Questions

I have some starting questions:

  • Before production: how can I know that I will build a system that will be “reactive enough”, that will react sufficiently to the load?
  • During production: how can I know that the real load is not bigger than the estimated load, for example, and what are the exact numbers in case of incidents?

 

Traditional answers and beyond

The “traditional answer” is: “ok… we will do the needed performance tests”. Right! But what about the second type of question: real load and incident (or needs) analysis? The next answer is that we will also need data-gathering tools built into the product, not (only) in offline tests.

 

Adaptive products: built-in feedback

If we need to follow REACT-type requirements or similar, in an adaptive context (read: Agile), we need to follow the principles of Adaptive Products and make sure that we do not miss the link of early and continuous feedback from business/production back to the products. In such cases, the product itself should have the above-mentioned data-gathering tools built in, not (only) in offline tests (with a possible smarter option of self-assessment).

 

Consumable Solutions: Measurable!

Reactive and similar properties of software systems cannot be claimed if they are not measurable in production and in tests. Using this feedback, the system can be configured or adapted through adequate changes. Such capabilities of the products must be part of the criteria for consumable solutions (a concept introduced and used by DAD – Disciplined Agile Delivery). If the real capability of the system is not known, if there are incidents, and if the information needed to adjust according to the needs is not available just in time, then it is not a consumable solution.

Some simple examples:

  • Events must be registered with their occurrence timestamp, start and end processing timestamps, and type of response
  • Detailed sub-processing data must be gathered if it is significant
  • The number of events/processings should be registered if it is bigger than an estimated threshold
  • The internal availability of various system resources should be available for monitoring
  • More info should be “dumped” on incidents and failures
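The first and third of these examples could be sketched as minimal built-in instrumentation; a hypothetical illustration (all names and the threshold value are invented):

```python
import time

event_log = []
LOAD_THRESHOLD = 1000  # illustrative events-per-run alert level

def process(event, handler):
    # Register the event with its timestamps and response type,
    # so real load and incidents can be analyzed from production data.
    record = {"event": event, "occurred_at": time.time()}
    record["started_at"] = time.time()
    try:
        handler(event)
        record["response"] = "ok"
    except Exception as err:
        record["response"] = f"error: {err}"   # more info on failure
    record["ended_at"] = time.time()
    event_log.append(record)
    if len(event_log) > LOAD_THRESHOLD:
        record["load_alert"] = True            # load above estimate
    return record
```

In a real product the log would go to a monitoring backend rather than an in-memory list, but the principle is the same: the feedback channel is part of the product itself.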


Update – Critical versus Important

(After a discussion thread with Martin Thompson on Reactive Systems Google group)

There are two kinds of cases:

  • Systems where the REACT capability is critical (such as some financial systems) – the “elite”
  • Systems where the REACT capability is important, but not critical – the “majority”

The critical cases (the “elite”) need a complex approach with external and internal measurements, with specific care for high amounts of non-linear data/events and for accuracy of measurements in such a context. In many cases, these are domains with specific regulations that demand a mature approach to measurement.

The “majority”, instead, usually have insufficient support for measurement. Also, in such cases, the cost of measurement can be significant. The best approach should consider smart, opportunistic solutions that maximize the return on investment with less effort. Built-in support for feedback can offer such solutions because it has access to the “intimate knowledge” of the system, which means the most effective/efficient measurements.

The lesson is: “Even if you know exactly what is going on in your system, measure performance, don’t speculate. You’ll learn something, and nine times out of ten, it won’t be that you were right!” – Ron Jeffries, “It Takes Awhile to Create Nothing” (from “Refactoring: Improving the Design of Existing Code”)
