Category Archives: Adaptive products

Evolutionary design?

Yes, a software design that can evolve (develop, progress, advance, mature, grow, expand) would be very useful. I do not want to rewrite a lot of code from scratch from time to time.

The main question is whether that is sufficient.

In fact, I want software that can follow the business.

That means this software can be changed as the business needs change. These changes could be small or complex, rare or frequent. Or even late.

That means this software can be changed at a reasonable cost, without unnecessary waste.

That means this software can keep changing indefinitely, easily and economically. The software will not accumulate too much technical debt and the team will always be prepared and in shape.

That means my software and my process should be Adaptive and Lean.

Agile.

  

 

Limits of inspected-in quality

 “All necessary test cases, starting from requirements”

Quality has two major “sources”: built-in quality (building without defects) and inspected-in quality (testing, finding defects and fixing them).

Poor built-in quality is – statistically speaking – a known and common problem in software development. In many cases, as the product grows, quality decreases over time through the accumulation of technical debt. There is a widespread belief that we can improve quality (even in difficult cases) based mostly on a test & fix approach. We will try to show that this is “mathematically” wrong and is rather wishful thinking.

(Traditional voice) We will test & fix, right? And we will do that in the most professional way. We will document the requirements very well. We will generate all the necessary test cases starting from the requirements. We will use them to execute very professional tests and we will make all the significant fixes needed for good quality.

Yes, indeed, a professional way to generate the test cases will start from the requirements, identify the functional flows, generate the running scenarios and then the test cases, considering input data, test conditions and expected results.

(Traditional) That should be great! Even TDD implements the test cases considering all these elements.

That is true, but there is a difference: TDD offers something more. Anyway, you forgot something …

How to blow up traditional testing

Let’s consider a pretty complex set of functionality. The orthodox approach to testing will do the following:

  • (We have good analysts and good testers)
  • Write good enough requirements
  • Extract scenario paths from the functional flows and state transitions
  • Generate test cases from the scenarios, also considering inputs, conditions and expected results
  • Run the tests, find defects and fix them

That could take longer, but finally we will reach the quality goals, right?

NO!

Please consider this extra scenario: what if we have too much technical debt? Let’s see some consequences (the numbers used are examples).

Scenarios – If the requirements logic generates 100 scenarios, poorly designed code could physically have 600 or more. How is that possible? It is pretty easy, for example by accessing an undesired global context. If my scenario accesses a global context (even a single global variable…), the number of real test cases multiplies, because that data is changed in unexpected ways (not specified in the requirements) by other flows. In fact, the global context mixes and multiplies pieces of functionality that are not logically related. Duplicated code is also a mighty enemy that creates and multiplies test cases (“phantoms”, beyond the ones resulting from the requirements). If we want to change a formula that was hardcoded at every usage, how can a tester really know where it was hardcoded and how many times?
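
To make this concrete, here is a minimal, hypothetical Java sketch (names invented for illustration) of how a single global variable multiplies the real scenarios beyond the documented ones:

    // Hypothetical example: a shared, mutable global context.
    class GlobalContext {
        static double discountRate = 0.0; // written by unrelated flows
    }

    class InvoiceService {
        // The requirement describes one scenario: total = sum of lines minus the discount.
        // Because the formula reads global state, the real behavior also depends on whatever
        // other flows wrote into GlobalContext before this call, so every combination of
        // those hidden writes becomes an extra, undocumented ("phantom") test case.
        double total(double sumOfLines) {
            return sumOfLines * (1.0 - GlobalContext.discountRate);
        }
    }

A tester working only from the requirements has no reason to vary discountRate, so these extra paths stay invisible.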

States – If the requirements logic supposes 50 states for the main entities and the main logic, poorly designed code could physically have 500. How is that possible? Again, it is easy. For example, if the state transitions of an entity are not encapsulated, the code can be fragile enough to induce supplementary phantom states because of poorly written state transitions (one way is to use only basic setters to perform these transitions).
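
A small sketch of the difference, with an assumed Order entity and invented states: raw setters let any caller create phantom states, while encapsulated transitions only permit what the requirements specify.

    // Fragile: any caller can jump to any state (or a typo), creating phantom states.
    class OrderRaw {
        String state; // "NEW", "PAID", "SHIPPED"... or anything else
        void setState(String state) { this.state = state; } // no transition rules enforced
    }

    // Encapsulated: only the transitions allowed by the requirements exist in the code.
    class Order {
        enum State { NEW, PAID, SHIPPED }
        private State state = State.NEW;

        void pay() {
            if (state != State.NEW) throw new IllegalStateException("cannot pay from " + state);
            state = State.PAID;
        }

        void ship() {
            if (state != State.PAID) throw new IllegalStateException("cannot ship from " + state);
            state = State.SHIPPED;
        }
    }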

Expected results – poorly designed code will also damage the displayed results, the logs or the persistence. The easiest way is to damage the displayed data: “catch” the errors and not display them, display a valid-looking value instead of a null, etc.
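
Here is a hedged illustration (hypothetical class) of such a test-damaging defect: the error is swallowed and a plausible value is displayed, so a test that only compares the display with the expected result cannot see the failure.

    class PriceView {
        // Test-damaging defect: the exception is caught and a "valid" price is shown,
        // so the expected-result check in a black-box test silently passes.
        String displayPrice(Double price) {
            try {
                double value = price; // throws NullPointerException when price is null
                return String.format("%.2f", value);
            } catch (Exception e) {
                return "0.00"; // the defect is hidden behind a plausible displayed value
            }
        }
    }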

Let’s make a summary:

  • We have 400 official test cases for 100 scenarios and 50 state transitions, and they cover the requirements
  • Physically, the poorly designed system has more than 600 scenarios and 500 state transitions, which would need more than 2000 test cases to cover

We will start testing – running 400 test cases for a system that needs 2000 test cases:

  • We will fix the defects that belong to the 400 official test cases
  • The testers will do some supplementary exploration and we will catch some of the defects from phantom test cases and phantom state transitions
  • A lot of defects will remain unobserved, mostly those from the “undocumented” test cases

(Traditional) Wait!! Won’t the team – developers, testers and analysts – discover the problem? We will test more and fix more defects!

Based on experience, that will almost never happen! There is almost zero chance of generating all the “undocumented” test cases by pure exploration: the tester has no clue where most of the hidden, phantom test cases are.

(Traditional) There should be a way to solve that!

There is one: when a tester discovers phantom test cases, or when the expected results are damaged, that tester must report the quality problem: “we have test-damaging defects, please analyze them and fix the root cause”.

Test and fix cannot protect you from test-damaging defects

(Traditional) That sounds good enough!

It is not! It is too late to discover during testing that we have such poor code and such poor built-in quality: the tests themselves are affected and the cost will be too high.

Some conclusions

There is nothing wrong with generating the test cases from the requirements in an orthodox manner. It is just that you must also physically implement those requirements, in order to make the generated test cases effective. The developer must not have the “liberty” to implement phantom scenarios and phantom states, or to mix data and flows in ways that were not specified by the requirements.

We need to physically implement the requirements

It is ineffective, inefficient, unethical and unprofessional to build a system that cannot be tested – one where the real system test cases far outnumber the requirements-generated test cases (and are practically impossible to generate at testing time).

The traditional way of trying to get quality mostly by test and fix is in many cases a cognitive bias (inadequate logic) that brings a spiral of undesired results. Martin Fowler’s Technical Debt Quadrant logic is applicable in this case.

What could be done?

We need to reduce as much as possible the impedance between the requirements specification and the physical implementation. Some examples (a small sketch follows the list):

  • Physically “protect” the business: separate the business aspects and do not duplicate them
  • Basic design principles: do not duplicate, do not access a global context
  • Physically “protect” the functional flows (separate them: do not mix them with other aspects)
  • Physically “protect” the logic of timing sequences (see Uncle Bob’s “Clean Coders” videos)
  • Physically “protect” the logical state transitions
  • Prevent and fix test-damaging defects
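
As one possible sketch of “physically protecting” a business aspect (the rule and the names are invented for illustration): the rule lives in exactly one place, and its collaborators receive it explicitly instead of reading a global context.

    // The rule exists once, in a dedicated design element.
    class LateFeeRule {
        private final double dailyRate;
        LateFeeRule(double dailyRate) { this.dailyRate = dailyRate; }
        double feeFor(int daysLate) { return daysLate * dailyRate; }
    }

    // A functional flow receives the rule explicitly: no duplication, no global access.
    class InvoicingFlow {
        private final LateFeeRule lateFee;
        InvoicingFlow(LateFeeRule lateFee) { this.lateFee = lateFee; }
        double amountDue(double baseAmount, int daysLate) {
            return baseAmount + lateFee.feeFor(daysLate);
        }
    }

With this shape, the scenarios that exist in the code are exactly the ones the requirements describe for the rule and the flow.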

What about TDD?

TDD is on the list of things that help us physically implement the requirements. The ultimate requirements are the test cases, and TDD physically implements those test cases. The only major limit of TDD is that it almost never implements all the test cases.

The ultimate requirements are the test cases

… but we almost never implement all the test cases as automated tests.

Anyway, from industry experience, the items in the above list are also needed to enable TDD.
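
A minimal sketch of what “physically implementing a test case” with TDD can look like, assuming JUnit 5 and reusing the hypothetical LateFeeRule from the earlier sketch: the requirement-derived test case (input, condition, expected result) becomes executable code that the implementation must satisfy.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class LateFeeRuleTest {
        // Requirement-derived test case: 3 days late at 2.0 per day must yield a fee of 6.0.
        @Test
        void threeDaysLateAtTwoPerDayCostsSix() {
            LateFeeRule rule = new LateFeeRule(2.0);
            assertEquals(6.0, rule.feeFor(3), 0.0001);
        }
    }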

New product and Minimum Viable Process with DAD

A live ecosystem

With a new product we initiate, and must keep “alive”, a full ecosystem (see the previous post “A product … is not a product”) that involves:

  • business relationships
  • a viable process economics
  • sustainable technical solutions
  • a product team(s)
  • repeatable results
  • an evolving, consumable solution, and others.

What helped me and my collaborators in some real cases were Disciplined Agile practices and some complementary ones such as Clean Code and Clean Architecture.

“Temptations” and consequences

The first temptations when building new products, where we have observed undesired consequences, are:

  • Copy-Paste Process: just repeat – without sufficient adaptation – the process from existing products (more or less similar to the new one). Schedule pressure can compromise this process “re-usage” even more.
  • Ad hoc process: if an existing process is not available, an ad hoc process is adopted without sufficient discipline and many process goals are neglected
  • “Normal” life-cycle: directly re-use a “normal” project life-cycle that was standard for past projects/products
  • Ad hoc team: superficially building the product team, without sufficient collaboration and skills

The consequences could be very unpleasant:

  • A fragile product, where the poor design does not allow us to reach the opportunities: it cannot deliver the required changes and cannot offer sufficient quality
  • The product is not consumable: not enough user-level guidance, integration problems, performance problems
  • A significant mismatch between offered features and customers’ needs

 Different product – different ecosystem

We cannot re-use the process of a previous product as it is – the elements of the necessary ecosystem could vary more or less, and here are some elements that we found to be different:

  • The inherent complexity of business domain or of the solution
  • The customers & associated relationships
  • Derived from complexity: team skills, minimum viable good design
  • Required quality
  • Default performance
  • Others

As a first conclusion, we need to inspect and investigate the new context and adapt our process in order to build the proper ecosystem. Context investigation and process adaptation must be permanent concerns, but the big deviation expected with a new product tells us that this is a special case to be considered.

Risks, uncertainty and opportunities

A new product means a higher degree of risk and uncertainty, but also possible opportunities. Risks and uncertainty must be addressed in order to protect the opportunities, and at the same time we need to keep the process adaptive enough to respond to those opportunities.

The main question is: what practices and approaches do we need in such a context? We have found some that work.

An uncertainty-adapted life-cycle

For a new product, it is less likely that we will have a good enough initial envisioning of the requirements (and of the solution). The DAD option of an exploratory (lean startup) life-cycle proposes a more adaptive approach, where the initial idea can be repeatedly adjusted after getting business-domain feedback on what we incrementally build.

Choosing the right process options

As I have mentioned, the process “deviation” from what we know can be significant. The process goals will be similar, but it may be necessary to choose different options. DAD’s main logic was built on exactly this idea. Beyond the life-cycle approach, we can choose various options for goals such as: technical strategy, prioritizing stakeholders’ needs, requirements elicitation methods, proving the architecture early, validating the release.

Some examples of options per goal, described by the DAD guidance:

  • Architectural spikes and/or an end-to-end skeleton for proving the architecture early
  • Business value, risks and dependencies as criteria for prioritizing stakeholders’ needs
  • Coaching, mentoring, training and pair programming for improving team members’ skills

When we begin a new product is the best time to start adapting our options to the new context.

Effective practices

Building a new product is a difficult endeavor from many points of view: requirements clarification, solution design, (new) knowledge management. Here are some key approaches and associated practices that we have found to be helpful.

Effective and efficient collaboration – apply Non-Solo Development for modeling (Model Storming) and programming (Pair Programming), plus Active Stakeholder Participation.

  • Non-Solo Work is critical for the highly difficult tasks related to initiating a new product
  • Because the knowledge is newly created, we need to distribute it efficiently and effectively to the other team members and to the stakeholders

Opportunistic envisioning – Look Ahead

  • We cannot handle complexity and uncertainty only with the normal, milestone-based forms of looking ahead – Inception Envisioning and Iteration Modeling. It was very useful to opportunistically use Look-Ahead Modeling.

Knowledge evolution – Document Continuously, Document Late

  • Knowledge is just being created – we have suffered when we forgot to capture at least the essentials – Document Continuously helps with this aspect
  • From time to time, we extracted some overview information later (System Metaphor, Architecture Handbook), once our view and the results had stabilized – Document Late

Adaptive process – now is the best time to do that

  • The process was rather fluid and was defined and clarified as we advanced and saw what really worked in the new context.
  • Good options were demonstrated by doing Just Barely Good Enough work: refine not only your work but also your process in a rolling wave, and use events such as architecture envisioning, look-ahead modeling and iteration modeling to adapt your process

Minimal good design

When a new product was started, before and after the Minimum Viable Product, we were forced to keep changing the product quickly and at a sustainable pace. In order to “chase” the opportunities, we had to pay down legacy technical debt and avoid new debt as much as possible. A minimal good design is necessary to deal with these continuous changes when the context is complex. In this case, Refactoring, Clean Code and Clean Architecture practices were the backbone of an agile, adaptive design and, finally, of an adaptive product.

Each distinct product will need a specific Minimal Good Design!

Some conclusions

DAD is built to offer context-adapted guidance. Using Disciplined Agile and outstanding practices such as Clean Code and Clean Architecture was what we needed to build a new product and to define the Minimum Viable Process necessary for it.

DAD and Agile Modeling practices are described here.

Bonus!

Before DevOps: Delivering more by delivering less

Undesired complexity gives undesired deliveries

There are many aspects to consider related to DevOps and delivery, many strategies, and (in the right context and with the right approach) there are also some very useful tools. It is a complex issue, and the last thing we want is to make it even harder through undesired complexity. A poor and inappropriate design is the main cause of the problems.

Suppose that the main aspects of a product’s functionality and design are the following:

  • Functionalities: f1, f2, f3
  • Business rules: r1, r2
  • External frameworks, technologies and drivers (APIs, hardware interfaces, others): e1, e2
  • External interfaces: i1, i2
  • Technical mechanisms: t1, t2, t3…

Only what changes together should stay together and be delivered together.

As a generic rule, if one of these aspects is changed, we do not want to affect, change and deliver the other aspects. Below are some examples of anti-patterns.

Anti-pattern: Lack of functional cohesion

Just a starting example: if we are asked to deliver a change for functionality f1, it is highly undesirable to also change functionalities f3 and f4 only because f1, f3 and f4 are unnecessarily coupled in the implementation. This coupling could mean the same function, the same class or the same package, as opposed to using distinct design elements.
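
A hedged sketch of the anti-pattern (f1, f3, f4 stand for the placeholder functionalities): when they share one class, any change to f1 forces the others to be rebuilt, retested and redelivered; distinct design elements avoid that.

    // Anti-pattern: unrelated functionalities coupled in one class.
    class MixedOperations {
        void f1() { /* functionality 1 */ }
        void f3() { /* functionality 3 */ }
        void f4() { /* functionality 4 */ }
    }

    // Functionally cohesive alternative: each functionality is a distinct design element,
    // so a change to f1 does not touch (or redeliver) f3 and f4.
    class F1Operation { void execute() { /* functionality 1 */ } }
    class F3Operation { void execute() { /* functionality 3 */ } }
    class F4Operation { void execute() { /* functionality 4 */ } }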

Anti-pattern: Non-DRY business rules

If the business rule r1 is used in several functionalities (which have dedicated components) and the rule’s implementation is duplicated (multiplied) in every functionality, then a change in r1 will require changing and delivering all those functionalities and components.
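
A small sketch, under the invented assumption that r1 is a VAT rule: duplicated, a change to r1 means changing and delivering every functionality that copied it; extracted once, only the shared component changes.

    // Anti-pattern: rule r1 duplicated in two functionalities.
    class OrderTotals {
        double withVat(double net) { return net * 1.19; } // copy 1 of r1
    }
    class RefundTotals {
        double withVat(double net) { return net * 1.19; } // copy 2 of r1
    }

    // DRY alternative: r1 lives in one component that both functionalities use.
    class VatRule {
        double withVat(double net) { return net * 1.19; }
    }
    class OrderTotalsDry {
        private final VatRule vat = new VatRule();
        double total(double net) { return vat.withVat(net); }
    }
    class RefundTotalsDry {
        private final VatRule vat = new VatRule();
        double refund(double net) { return vat.withVat(net); }
    }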

Anti-pattern: Mixing functionalities with external communication

Suppose that f1 and f2 use TCP sockets to communicate with some external systems and we need to replace this type of communication. If the f1 and f2 implementations are mixed with socket-management aspects, then we need to change and deliver f1 and f2 as well.
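
A possible shape of the fix, sketched with invented names: f1 depends on a small port interface, and the TCP detail lives in one adapter, so replacing the transport does not require changing and delivering f1.

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    // The functionality only knows this abstraction, not sockets.
    interface ExternalSystemPort {
        void send(String message) throws IOException;
    }

    // One adapter owns the TCP socket detail; replacing the communication type
    // means replacing this class only.
    class TcpAdapter implements ExternalSystemPort {
        private final String host;
        private final int port;

        TcpAdapter(String host, int port) {
            this.host = host;
            this.port = port;
        }

        @Override
        public void send(String message) throws IOException {
            try (Socket socket = new Socket(host, port);
                 OutputStream out = socket.getOutputStream()) {
                out.write(message.getBytes(StandardCharsets.UTF_8));
            }
        }
    }

    // f1 stays unchanged (and undelivered) when the transport changes.
    class F1Functionality {
        private final ExternalSystemPort external;

        F1Functionality(ExternalSystemPort external) { this.external = external; }

        void notifyExternalSystem(String event) throws IOException {
            external.send(event);
        }
    }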

Anti-pattern: Mixing functionalities with external interfaces

External interfaces may involve handling specific external data structures and/or a specific communication protocol. If we mix these parts with internal functionalities, then with any change in the interface we must also change and deliver some unrelated internal parts.

Anti-pattern: Mixing functionalities with technical aspects

You need to protect the representation of the business inside the product from changes related to technology. Where is technology involved? Any I/O aspect that wraps the hardware – GUI, network, databases, file systems – is strongly related to technologies and platforms. If, for example, you have a lot of functionality and business representation in the design of the GUI elements, any change in GUI technology will affect the business representation inside the product, which must then also be massively changed and delivered.

Anti-pattern: “Utils”

A symptom of poor design that can cause undesired supplementary work in development and delivery is the “utils” package, especially one without clearly dedicated sub-packages. Examples:

  • a “utils” package that mixes together classes belonging to different functional aspects
  • a “utils” package that mixes various technical aspects without dedicated sub-packages

This kind of design means that we need to change, test and deliver the “utils” with almost any change to the product.

Anti-pattern: Dirty code

Breaking the simplest Clean Code rules (similar to the ones used in refactoring) can cause undesired coupling and an undesired increase in delivery scope. Some examples that can induce such problems:

  • Any duplicated/multiplied code
  • Breaking the SRP (Single Responsibility Principle)
  • Global context data and global variables (breaking the Law of Demeter)

Minimizing the need of change and delivery

We need to reduce the change and delivery work to only what is really required. There are several practices and approaches that can avoid these problems (the examples presented above or similar ones):

  • Keep the code clean by writing Clean Code first, then refactor and pay down the technical debt whenever necessary
  • Use functional cohesion as the main criterion for creating components and packages
  • Use Clean Architecture, which proposes a default, strategic separation of concerns
  • Extend the XP engineering practices (Simple Design, Refactoring, TDD) with the ones from Agile Modeling and DAD
  • Respect the Law of Demeter at every level of the design – do not use any global context
  • Make the products adaptive by keeping up to date with the target business through early and frequent feedback from that business.

You can argue that I already wrote about all of this in a previous post, “Roadmap to an Agile Design”. Yes, indeed: in order to deliver just what is needed and avoid waste, you must have an Adaptive Design, the main characteristic of an Agile Design.

In order to do that, you should be open to all outstanding agile design practices and not be closed inside the smaller universe of some very lightweight Agile methods. And remember that those lightweight methods were not created as full process methodologies, but as guidance for building a customized process.

Imagine how much worse all these problems become if we have multiple variations and variants of the same product. The massive effort for any change and delivery will block the overall agility and responsiveness of the development team and massively degrade the overall economics on both the development and the customer side.

Use a strategic separation of concerns

Do not use any reference to any global context
