Category Archives: Practices

DRY principle beyond small talk

Duplicated code? Wrong term!

“Duplicated code” is a very frequently used term. Sorry, but at least in my experience, I have refactored thousands of pieces of code where this term is completely inappropriate: code that is repeated 5, 20 or 50 times is MULTIPLIED, not duplicated.

Using “duplicated” in fact misrepresents the magnitude and severity of the problem. Consider, for example, a business rule multiplied by an average factor of 40, in a system with hundreds or more business rules represented.

DRY—Don’t Repeat Yourself: Every piece of knowledge must have a single, unambiguous, authoritative representation within a system. (The Pragmatic Programmer by Andrew Hunt and David Thomas)
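A minimal Java sketch of the difference (names and numbers are hypothetical, only to illustrate the idea): the same business rule re-typed in several places versus a single, authoritative representation.

```java
// Multiplied: the same rule re-typed inline in several services (and already diverging):
//   OrderService:   if (customer.loyaltyYears() > 5 && total > 100) total *= 0.9;
//   InvoiceService: if (customer.loyaltyYears() > 5 && total > 100) total *= 0.9;
//   ReportService:  if (customer.loyaltyYears() >= 5 && total > 100) total *= 0.9;

// Single, authoritative representation of the same rule:
public final class LoyaltyDiscount {
    private static final int MIN_LOYALTY_YEARS = 5;
    private static final double MIN_TOTAL = 100.0;
    private static final double DISCOUNT = 0.10;

    public double apply(int loyaltyYears, double total) {
        return (loyaltyYears > MIN_LOYALTY_YEARS && total > MIN_TOTAL)
                ? total * (1.0 - DISCOUNT)
                : total;
    }
}
```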

DRY dogma versus DRY spirit

A pragmatic approach will recognize that many software systems are far from respecting this principle (or other principles) well enough. Anyway, if you want to control or fix the design, it is better to focus first on what matters most. We can find a lot of “duplicated”/multiplied code, but in some cases the consequences are more severe. Here are some examples of things that you should never multiply:

  • Business rules and algorithms
  • Functionalities
  • Entities
  • Technical mechanisms

More: if you are constrained to duplicate something – for example, the same business rule in Java and in DB code – you should have a clear map of these duplications, which must be kept synchronized (see the sketch below).
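One lightweight way to keep such a map – purely illustrative, with hypothetical identifiers – is to document every known representation of the rule next to one of them:

```java
/**
 * Business rule BR-017 (hypothetical): "orders over 100 for 5+ year customers get 10% off".
 *
 * DUPLICATION MAP – these representations must be kept synchronized:
 *   - Java: com.example.billing.LoyaltyDiscount#apply
 *   - DB:   stored procedure APPLY_LOYALTY_DISCOUNT (BILLING schema)
 *   - Docs: pricing-rules.md, section "Loyalty discount"
 */
public final class LoyaltyDiscount {
    // ... rule implementation (see the previous sketch)
}
```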

Some simple questions

Just a simple question for code multipliers: having a business rule already multiplied 50 times, what are the chances of finding and changing all of its occurrences when that rule changes? 5%? That is a systematic bug generator.

Another question: what is the effort needed to change the same business rule in 50 places, where most of the time is spent just finding those places? That is a systematic destruction of the software economics.

Another question: how is it possible to persist in such mistakes for years?

The answer could be a quote from the Genesis song “Land of Confusion”: “This is the world we live in”. Or maybe the song title itself is the best answer.

Misunderstanding the effects

The effects of multiplied code are, in most cases, deeply misunderstood.

“We have some poor design, but we will test it and it will work”

Really?

Too often there is this kind of bug report: if I execute the sequence a, b, c, d the feature works, and if I execute the sequence b, a, c, d the same feature does not work. In many cases it does not work because it is not the same feature! That means it is not the same code. The main problem is that testers usually check the first sequence and do not also check the second one, because they assume that the same function and the same code would be executed (a sketch of how two such copies diverge follows the list below).

Multiplied code has a high probability to:

  • not be tested in all occurrences
  • not be changed in all occurrences
  • be mixed, in each occurrence, with other pieces of code, greatly increasing the fragility of the product
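A minimal, hypothetical Java sketch of the bug report above: the “same” validation exists twice, reached through two different execution sequences.

```java
// Copy #1, reached by the sequence a, b, c, d.
final class ScreenA {
    boolean validAmount(double amount) {
        return amount > 0 && amount <= 10_000;
    }
}

// Copy #2, reached by the sequence b, a, c, d – silently diverged.
final class ScreenB {
    boolean validAmount(double amount) {
        return amount > 0 && amount < 10_000;
    }
}
// For amount == 10_000 the first sequence works and the second fails,
// while testers usually verify only the first sequence.
```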

More questions

The reverse question (not whether you have multiplied code, but): do you have at least some examples of how not to multiply business rules, algorithms, functionalities, and mechanisms?

Recap

  • It is not duplicated code, it is multiplied!
  • DRY first what matters most
  • Multiplied code will (almost) never be fully tested
  • Multiplied code will (almost) never be fully changed
  • Multiplied code compromises the software economics

Before DevOps: Delivering more by delivering less

Undesired complexity gives undesired deliveries

There are many aspects to consider related to DevOps and delivery, many strategies, and (in the right context and with the right approach) there are also some very useful tools. It is a complex issue, and the last thing we want is to make it even harder because of undesired complexity. A poor and inappropriate design is the main cause of the problems.

Suppose that the main aspects of a product's functionalities and design are the following:

  • Functionalities: f1, f2, f3
  • Business rules: r1, r2
  • External frameworks, technologies, and drivers (APIs, hardware interfaces, others): e1, e2
  • External interfaces: i1, i2
  • Technical mechanisms: t1, t2, t3, …

Only things that change together should stay together and be delivered together.

As a generic rule, if one of these aspects is changed, we do not want to affect, change, and deliver the other aspects. Below are some examples of anti-patterns.

Anti-pattern: Lack of functional cohesion

Just a starting example: if we are requested to deliver a change for functionality f1, it is highly undesirable to also change functionalities f3 and f4 only because f1, f3 and f4 are unnecessarily coupled in the implementation. This coupling could mean: the same function, the same class, or the same package, as opposed to using distinct design elements.
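A minimal sketch of this coupling and its functionally cohesive alternative (the class names are hypothetical):

```java
// Anti-pattern: f1, f3 and f4 coupled in one class – a change to f1
// forces re-testing and re-delivering f3 and f4 as well.
class OrderOperations {
    void placeOrder()   { /* f1 */ }
    void cancelOrder()  { /* f3 */ }
    void exportOrders() { /* f4 */ }
}

// Functionally cohesive alternative: one design element per functionality,
// so each can be changed and delivered independently.
class PlaceOrder   { void execute() { /* f1 */ } }
class CancelOrder  { void execute() { /* f3 */ } }
class ExportOrders { void execute() { /* f4 */ } }
```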

Anti-pattern: Non-DRY business rules

If the business rule r1 is used in several functionalities (each having dedicated components) and the rule implementation is duplicated (multiplied) in every functionality, then a change in r1 will require changing and delivering all those functionalities and components.
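A hedged sketch of the DRY alternative, with hypothetical names: r1 has one authoritative representation, and the components of f1 and f2 only use it.

```java
// r1: single, authoritative representation of the rule.
public final class CreditLimitRule {
    public boolean allows(double exposure, double limit) {
        return exposure <= limit;
    }
}

class PlaceOrder {                                   // component of f1
    private final CreditLimitRule r1 = new CreditLimitRule();

    boolean accepted(double exposure, double limit) {
        return r1.allows(exposure, limit);           // uses r1, does not copy it
    }
}

class ApproveInvoice {                               // component of f2
    private final CreditLimitRule r1 = new CreditLimitRule();

    boolean accepted(double exposure, double limit) {
        return r1.allows(exposure, limit);           // a change to r1 is one change, one delivery
    }
}
```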

Anti-pattern: Mixing functionalities with external communication

Suppose that f1 and f2 use TCP sockets to communicate with some external systems and we need to replace this type of communication. If the f1 and f2 implementations are mixed with socket management aspects, then we also need to change and deliver f1 and f2.
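A minimal sketch (hypothetical names) of keeping the socket management out of f1 and f2, behind one replaceable adapter:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// f1 and f2 depend only on this abstraction, not on TCP sockets.
interface ExternalSystemGateway {
    void send(String message);
}

// All socket management lives in one replaceable adapter.
class TcpGateway implements ExternalSystemGateway {
    private final String host;
    private final int port;

    TcpGateway(String host, int port) {
        this.host = host;
        this.port = port;
    }

    @Override
    public void send(String message) {
        try (Socket socket = new Socket(host, port);
             OutputStream out = socket.getOutputStream()) {
            out.write(message.getBytes(StandardCharsets.UTF_8));
        } catch (IOException e) {
            throw new IllegalStateException("Sending to external system failed", e);
        }
    }
}
// Replacing TCP with, say, a message queue means writing another adapter;
// the f1 and f2 implementations are neither changed nor re-delivered.
```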

Anti-pattern: Mixing functionalities with external interfaces

External interfaces may involve handling some specific external data structures and/or a specific communication protocol. If we mix these parts with internal functionalities, then with any change in the interface we must also change and deliver some unrelated internal parts.

Anti-pattern: Mixing functionalities with technical aspects

You need to protect the representation of the business inside the product from changes related to technology. Where is the technology involved? Any I/O aspect that wraps the hardware – GUI, network, databases, file systems – is strongly related to technologies and platforms. If, for example, you have a lot of functionality and business representation in the design of the GUI elements, any change in GUI technologies will affect the business representation inside the product, which then also has to be massively changed and delivered.
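A small, hypothetical sketch of that protection: the business representation knows nothing about the GUI toolkit, and the GUI layer only formats and displays.

```java
// Business representation: no GUI types, no technology-specific imports.
public final class InvoiceTotals {
    public double totalWithVat(double net, double vatRate) {
        return net * (1.0 + vatRate);
    }
}

// Thin presenter for whatever GUI toolkit is used: it only formats and displays.
// Replacing the GUI technology does not touch InvoiceTotals.
class InvoicePresenter {
    private final InvoiceTotals totals = new InvoiceTotals();

    String totalLabel(double net, double vatRate) {
        return String.format("Total: %.2f", totals.totalWithVat(net, vatRate));
    }
}
```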

Anti-pattern: “Utils”

A symptom of poor design that can cause undesired supplementary work in development and delivery is the “utils” package, especially one without clearly dedicated sub-packages. Examples:

  • a “utils” package that mixes together classes belonging to different functional aspects
  • a “utils” package that mixes various technical aspects without dedicated sub-packages

This kind of design means that we need to change, test and deliver the “utils” with almost any change to the product.

Anti-pattern: Dirty code

Breaking the simplest Clean Code rules (similar to the ones used in refactoring) can cause undesired coupling and an undesired increase of the delivery scope. Some examples that can induce such problems:

  • Any duplicated/multiplied code
  • Breaking the SRP (Single Responsibility Principle)
  • Global context data and global variables (breaking the Law of Demeter) – see the sketch after this list
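A minimal, hypothetical sketch of the global-context problem and a Law of Demeter-friendly alternative:

```java
import java.util.HashMap;
import java.util.Map;

// Anti-pattern: a global context lets any class reach anything,
// creating hidden coupling to unrelated parts of the product.
class GlobalContext {
    static final GlobalContext INSTANCE = new GlobalContext();
    final Map<String, Object> data = new HashMap<>();
}

class ShippingCost {
    double compute() {
        // hidden dependency: nothing in the signature reveals it
        double weightKg = (double) GlobalContext.INSTANCE.data.get("parcelWeightKg");
        return weightKg * 1.5;
    }
}

// Alternative respecting the Law of Demeter: depend only on what is passed in.
class ShippingCostClean {
    double compute(double parcelWeightKg) {
        return parcelWeightKg * 1.5;
    }
}
```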

Minimizing the need for change and delivery

We need to reduce changes and deliveries to only the required ones. There are several practices and approaches that can avoid these problems (the examples presented above, or similar ones):

  • Keep the code clean by writing Clean Code first, refactoring, and paying down the technical debt whenever necessary
  • Use functional cohesion as the main criterion for creating components and packages
  • Use Clean Architecture, which proposes a default, strategic separation of concerns
  • Extend XP engineering practices (Simple Design, Refactoring, TDD) with the ones from Agile Modeling and DAD
  • Respect the Law of Demeter at any level of the design – do not use any global context
  • Make the products adaptive by keeping them up to date with the target business, through early and frequent injection of feedback from that business.

You can argue that I have already written about all this in a previous post, “Roadmap to an Agile Design”. Yes, indeed: in order to deliver just what is needed and avoid waste, you must have an Adaptive Design, the main characteristic of an Agile Design.

In order to do that, you should be open to all outstanding agile practices for design and not be closed within the smaller universe of some very lightweight Agile methods. And remember that those lightweight methods were not created as full process methodologies, but as guidance for building a customized process.

Imagine how much worse all these problems are if we have multiple variations and variants of the same product. The massive effort for any change and delivery will block the overall agility and responsiveness of the development team and massively degrade the overall economics for both the development and the customer side.

Use a strategic separation of concerns

Do not use any reference to any global context

JIT – Just in time and Software Development

(See also Part 2 – Two dimensions: Just in time and Envisioning)

JIT – a solution for uncertainty and complexity

Driving forces that introduce a JIT life-cycle in software development

  • Business side: frequent changes – it is too complex to perform (too much) requirements gathering up front
  • Development side: software solutions are mostly design (rather than production) – it is too complex to manage big chunks

As a consequence of the degree of uncertainty and complexity of both the requirements and the solution, the life-cycle (planning) that suits better will have a JIT model. Agile development has adopted such an approach from the start in its principles and practices: frequent and small releases, iterative development.

The JIT approach is a solution for dealing with uncertainty and complexity.

The JIT approach is a solution for dealing with uncertainty and complexity. It is similar to the mathematical approach to solving non-linear problems: feedback-based approaches (control theory). The main issue is that you cannot compute (in mathematics) something that is too complex. In software development, that means you cannot envision too much of the requirements, solution, and plan, because of uncertainty and complexity.
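As a loose mathematical illustration of the feedback idea (my own analogy, not from the original post): when a non-linear equation has no practical closed-form solution, it is solved iteratively, each step using the currently observed error as feedback to correct the previous guess.

```latex
% Solving f(x) = 0 when the answer cannot be "computed" in one step:
% each iteration corrects the current guess using the residual (the feedback).
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}
% The correction is driven entirely by the currently observed error f(x_n),
% just as each small release is corrected by the feedback it generates.
```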

You cannot “compute” something that is too complex

Agile is one of the development approaches that already uses JIT for several aspects. We can observe that XP, which uses the “small releases” approach, also uses the “simple design” principle/practice – it does not want to make guesses about possible solution aspects required by possible future change requests.

Let us reformulate some JIT aspects:

  • do not make guesses about uncertainty (what is difficult or impossible to clarify)
  • do not try to “compute” problems that are too complex

Do not make guesses about uncertainty

If these principles are not followed, we will have the same problems as in mathematics: huge deviations of the solution for small changes in the inputs. Translated to software development, that means huge waste.

JIT and Agile

Some Agile principles and practices that already use the JIT approach:

  • “Responding to change over following a plan” (Agile Manifesto – value)
  • “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.” (Agile Manifesto – principle)
  • “Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.” (Agile Manifesto – principle)
  • “Business people and developers must work together daily throughout the project.” (Agile Manifesto – principle)
  • “Make frequent small releases.” (XP rule)
  • “No functionality is added early.” (XP rule)
  • Simple design (XP Practice)
  • Model Storming (Agile Modeling / DAD – Disciplined Agile Delivery practice)
  • Document late, document continuously (Agile Modeling / DAD practice)
  • Active stakeholder participation (Agile Modeling / DAD practice)
  • Just Barely Good Enough (Agile Modeling / DAD practice)
  • Explicit support for JIT based life-cycles: Agile, Lean, Exploratory  (DAD – Disciplined Agile Delivery)
  • Inspect and Adapt (Scrum principle)

JIT – main difference versus manufacturing

We need to deal with the main difference versus manufacturing: JIT design versus JIT production. In manufacturing, we repeat the solution (design) from the last life-cycle; in software development, we need to find a new solution for the newer requirements (metaphor: the market requests, every time, a car with a new design). The major problem here is integrating the previous simple design with the current simple design (they are not just additive). We need the following:

  • The existing design must be easy to extend
  • Integration of the “next” design must be quick, easy and clean

I have described a solution to this problem in a previous post, a solution that rearranges some already known things from XP and Refactoring (as defined in Martin Fowler's book) – an Adaptive Design based on these rules:

  • Use a simple and clean design and then adapt it for new changes (example of an adaptation “tool”: refactoring)
  • Use design practices that increase adaptability (Refactoring, TDD, Clean Code, Clean Architecture)

JIT production in manufacturing is based on the responsiveness of a highly automated production. JIT design in software development is more difficult to automate, but we still need a solution for responsiveness – it is mandatory to have an Adaptive Design.

Summary

  • The JIT approach is a solution for uncertainty and complexity, one that is also validated in mathematics
  • The main problems of software development are related to uncertainty and complexity, which means the JIT approach can be useful in various ways
  • JIT rules: do not make guesses about uncertainty, and do not try to “compute” what is too complex
  • There are many Agile values, principles and practices that are based on the JIT approach
  • JIT design requires an Adaptive Design

“Sending a probe”: alternative to stubs, mocks and service virtualization

Picard: Data, prepare a class one probe. Set sensors for maximum scan. I want every meter of Nelvana Three monitored. –  Star Trek The Next Generation, The Defector

Stubs – “used commonly as placeholders for implementation of a known interface, where the interface is finalized/known but the implementation is not yet known/finalized.”

Mock objects – “are simulated objects that mimic the behavior of real objects in controlled ways”

Service virtualization – “emulates only the behavior of the specific dependent components that developers or testers need to exercise in order to complete their end-to-end transactions. Rather than virtualizing entire systems, it virtualizes only specific slices of dependent behavior critical to the execution of development and testing tasks.”

NEW – “Probe” – an isolated feature (or set of features) of a system (not just a mimic!), enhanced with testing support: flexible, configurable data and command input, and enhanced evaluation/validation output. It can be used for early integration tests, to reduce risks and provide useful feedback. The probe can simulate different scenarios of using a “real” feature sent into the remote environment and can send back useful feedback. The usage (and the integration) is simulated, not the functionality!

The probe can be designed to integrate with other systems, or to be standalone and just “explore” the environment (which rather means integration with infrastructure systems). Standalone probes, if carefully used, can also gather data from the production environment, without interfering with real applications and functionality.
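A minimal sketch of what such a probe could look like, with hypothetical names: the real feature (not a mock) is wrapped with flexible scenario input and an enhanced feedback channel.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical interface of the real feature under probe (not a stub or mock of it).
interface RemoteLogger {
    void log(String message) throws Exception;
}

// The probe: the real feature decorated with flexible input (scenarios)
// and an enhanced feedback channel, sent alone into the integration environment.
class RemoteLoggingProbe {
    private final RemoteLogger feature;
    private final List<String> feedback = new ArrayList<>();

    RemoteLoggingProbe(RemoteLogger feature) {
        this.feature = feature;
    }

    // Flexible input: drive the real feature with any scenario we want to explore.
    void runScenario(String scenarioName, List<String> messages) {
        for (String message : messages) {
            long start = System.nanoTime();
            try {
                feature.log(message);   // real call into the integration environment
                feedback.add(scenarioName + ": OK in "
                        + (System.nanoTime() - start) / 1_000_000 + " ms");
            } catch (Exception e) {
                feedback.add(scenarioName + ": FAILED - " + e.getMessage());
            }
        }
    }

    // Enhanced output: everything observed is sent back for evaluation.
    List<String> collectedFeedback() {
        return feedback;
    }
}
```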

Simple examples:

  • send a probe with the logging-related feature if, for example, logging must use remote web (or other) services that are available only in the integration environment and not locally
  • send a probe with a feature that sends emails in the specific context of the integration environment

Example with stubs, mocks and service virtualization

Description

  • we need to integrate our system, which will contain features A to L, with an external system
  • when feature sets ABC and EFG are ready, we want to start integration testing, and we will use stubs and mocks for features J and K, and service virtualization for feature L

Example with probes

From practical experience, I can say that using “probes” you can get flexibility similar to unit testing, but in the integration environment and for integration tests.

Description

  • we need to make some very early integration tests only for feature A (which is also realized very early) because it is important and its integration involves high risks and high uncertainty
  • we will “send” the feature A realization into the integration environment and we will get as much feedback as possible. For this purpose, feature A is “decorated” with facilities for accepting flexible inputs and producing enhanced outputs/feedback, with others for adapting, if necessary, to some external system, and possibly other things
  • we can run several test scenarios for feature A in the target integration environment and gather the useful feedback
  • important: it is easier to change the probe-specific decorations and quickly get more feedback than to re-deploy more features together with all the needed stubs/mocks or service virtualizations.

The main trick is the decoupling: we are isolating the test of <Feature A, Integration environment>, with the benefits of getting from the “decorations” more flexibility in tests (scenarios, input data) and a larger feedback channel. If the test context is <Features A-B-C-D, Integration environment>, we do not have these benefits for one single feature. The effort of building and using test cases with different usage scenarios and input data for only one feature is much bigger in the latter case.

Warnings:

  • “probe” tests are exploratory integration tests, for discovering problems or investigating risks
  • “probe”-based testing cannot be done if the design is not based on separation of concerns, SRP, decoupling, functional cohesion and other similar design principles.

One of the main concerns for Agile Architecture and Agile at scale is to perform integration tests first / integration tests early. “Probe”-based integration offers a flexible, opportunistic approach that could also be included in the spike solution category from XP.

Picard: Oh, it’s me, isn’t it? I’m the someone. I’m the one it finds! That’s what this launching is, a probe that finds me in the future. – “Star Trek: The Next Generation”, The Inner Light
