
Radix Financial Software (PTY) LTD ©2019 All Rights Reserved 

Ian Child

Micro-services Architectures with Domain Driven Design

Updated: May 31, 2019

Wikipedia defines DDD as:


An approach to software development for complex needs by connecting the implementation to an evolving model. The premise of domain-driven design is the following:

• placing the project's primary focus on the core domain and domain logic;
• basing complex designs on a model of the domain;
• initiating a creative collaboration between technical and domain experts to iteratively refine a conceptual model that addresses particular domain problems.

As highlighted above, the emphasis is on complex and evolving business needs. DDD is not applicable to simple business domains, even if a simple business domain is implemented with very complex technology. By way of example, if a Tic Tac Toe game were implemented on a quantum computer, modelling the game would not need DDD, despite the fact that the implementation might require highly specialised knowledge. Financial services, for a multitude of reasons, falls into the category of a complex business, and as such DDD can be very helpful in breaking down and modelling complexity within such a business. Using DDD, however, requires that both business and technical resources become familiar with the concepts embodied in the technique.

The core domain and its domain logic are split into autonomous views, or aspects, called bounded contexts. The key here is that a bounded context is (or can be considered) autonomous and could be deployed separately from other aspects of the core domain. The team uses a model-driven approach through creative collaboration between technical and domain experts. To facilitate this collaboration, a common lingua franca, or what DDD refers to as a ubiquitous language, is developed and documented. This ubiquitous language encompasses glossaries, acronyms, linguistic nuances and all the traits that normally become part of shared meaning and understanding. Teams working on different bounded contexts may develop ubiquitous language specific to the context within which they are working.

Clearly, the collection of bounded contexts makes up the entire core domain, and although a bounded context by definition must be deployable on its own, collaboration between bounded contexts is needed for the core domain to operate. One of the primary benefits of DDD is that by dividing the core domain into bounded contexts, teams are free to go their own way within a bounded context. However, no bounded context is an island, and bounded contexts themselves need to collaborate.

Collaborations between bounded contexts are formalised in DDD through the use of context maps. Where the semantics of two bounded contexts are radically different, the context mapping may be implemented through what is referred to as an anti-corruption layer, which is effectively a facade that translates between one set of semantics and another.

From a model-driven design point of view, bounded contexts should be modelled primarily from the point of view of the actions (use cases) a business needs to accomplish in order to execute on its business model. Modelling therefore becomes more preoccupied with verbs than nouns (top-down design). Modelling the verbs of the business typically yields the commands the software must ultimately be capable of executing. Executing a command typically gives rise to a state change in some business entity (more on entities below), and this state change may be recorded as a domain event. A domain event is an event that domain experts care about. As alluded to above, business entities normally carry the state of the business model, and events are what give rise to those state changes. Entities are therefore more preoccupied with modelling the nouns of the business domain (customer, account, product etc.). Translating commands, events and entities into software is typically achieved through the creation of object-oriented models, which implement methods (the commands) and attributes which combine to define the state of an entity. Since DDD is concerned with preserving the autonomous nature of a bounded context, class hierarchies within a bounded context are modelled as aggregates, with a root class at the top of the hierarchy.
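The relationship between commands, domain events and an aggregate root can be sketched as follows. This is a minimal illustration only; the Account aggregate, its fields and the FundsDeposited event are invented for the example, not taken from any particular framework:

```python
from dataclasses import dataclass
from decimal import Decimal

# A domain event: something that happened which domain experts care about.
@dataclass(frozen=True)
class FundsDeposited:
    account_id: str
    amount: Decimal

# Aggregate root: the only entry point into the aggregate's state.
class Account:
    def __init__(self, account_id: str):
        self.account_id = account_id
        self.balance = Decimal("0")
        self.pending_events: list = []

    # A command: a verb drawn from the ubiquitous language.
    def deposit(self, amount: Decimal) -> None:
        if amount <= 0:
            raise ValueError("deposit amount must be positive")
        self.balance += amount                      # state change...
        self.pending_events.append(                 # ...recorded as a domain event
            FundsDeposited(self.account_id, amount))

account = Account("ACC-001")
account.deposit(Decimal("100.00"))
```

Note that the command enforces an invariant (positive amounts) before mutating state, and the resulting domain event is captured on the aggregate for later publication.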

This approach ensures that no reference to the entities within an aggregate can be obtained other than through the root class. The root class is also responsible for ensuring that transactions executed within a bounded context are ACID, since data consistency within a bounded context is paramount. This brings us to persistence and repositories. Transactions and state are typically persisted via some sort of database technology, either a relational database or a NoSQL database. Within DDD, a bounded context's state and transactions are persisted to a single database dedicated to that bounded context. Whilst there are many development patterns, object-oriented design patterns dominate, with functional programming coming more and more to the fore.

Bounded Contexts within a Microservices Architecture

A microservices architecture can be used without using DDD, and likewise one may use DDD without implementing a microservices architecture. DDD does, however, map very well to the concepts of a microservices architecture, since each bounded context can be aligned with a microservice. The business logic is encapsulated in services 1 to n, with each being an implementation of a bounded context. Security is a cross-cutting concern which is implemented centrally but woven throughout the services fabric; there are other cross-cutting concerns such as application logging, access auditing etc.

On the OO front, the main persistence method has been to use an object-relational mapper (ORM), which translates the class hierarchy in the model to a set of tables in a relational database. Using a pure OO approach, the aggregate state is mutable, and domain events of business interest are typically mapped to an event table; these events are included in the ACID transaction. Once the event is committed, there are a number of techniques to ensure it gets written to the event bus for consumption by other bounded contexts.
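Committing domain events alongside aggregate state in a single transaction is often called the transactional outbox pattern, and can be sketched roughly as follows. The table names, and the use of SQLite as a stand-in for the bounded context's database, are assumptions for illustration:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance REAL)")
conn.execute("CREATE TABLE event_outbox "
             "(seq INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT)")
conn.execute("INSERT INTO accounts VALUES ('ACC-001', 0)")
conn.commit()

def deposit(account_id: str, amount: float) -> None:
    # The state change and its domain event commit in ONE ACID transaction,
    # so no event can exist without its state change (and vice versa).
    with conn:
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                     (amount, account_id))
        conn.execute("INSERT INTO event_outbox (payload) VALUES (?)",
                     (json.dumps({"type": "FundsDeposited",
                                  "account_id": account_id,
                                  "amount": amount}),))
    # A separate relay process would read event_outbox and publish to the bus.

deposit("ACC-001", 100.0)
```

The relay that drains the outbox onto the event bus runs outside the transaction, which is one of the "number of techniques" referred to above for getting committed events onto the bus reliably.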
One of the big advantages of using this style of programming to implement microservices is that the relational database handles invariant rules within a transaction by providing ACID transactions. Scaling a bounded context implemented using the imperative programming model described above typically requires scaling the relational database. For most applications in most businesses, relational databases will scale adequately. However, should the business require massive scale and elasticity, i.e. the ability to scale on demand, NoSQL databases become more applicable. The trade-off here is that most NoSQL databases are eventually consistent, and if the business problem at hand requires consistency above all else, which is typical in a financial services application, other strategies are required.

Another approach to implementing functionality within a bounded context is to use the event sourcing pattern. The fundamental idea of event sourcing is that of ensuring every change to the state of an aggregate is captured in an event object, and that these event objects are themselves stored, in the sequence they were applied, for the same lifetime as the application state itself. Fowler notes:

“the most obvious thing we've gained by using Event Sourcing is that we now have a log of all the changes. The key to Event Sourcing is that we guarantee that all changes to the domain objects are initiated by the event objects. This leads to a number of facilities that can be built on top of the event log:

Complete Rebuild: We can discard the application state completely and rebuild it by re-running the events from the event log on an empty application.

Temporal Query: We can determine the application state at any point in time. Notionally we do this by starting with a blank state and rerunning the events up to a particular time or event. We can take this further by considering multiple time-lines (analogous to branching in a version control system).
Event Replay: If we find a past event was incorrect, we can compute the consequences by reversing it and later events and then replaying the new event and later events. (Or indeed by throwing away the application state and replaying all events with the correct event in sequence.) The same technique can handle events received in the wrong sequence - a common problem with systems that communicate with asynchronous messaging.”

Event sourcing is extremely useful in recording financial transactions, since the notion of an immutable store is almost identical to the notion of a financial journal. The balance on an account at any point in time is the sum of all journal transactions posted to that account; the event log therefore becomes the audit log for the account. Event sourcing and functional programming can go hand in hand, since the state of any aggregate is simply a function of all previous events. Obviously this can become a problem as the number of events grows. An approach to solving this is to snapshot aggregate state from time to time and use the snapshot as the starting state from which to calculate current state.
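Viewed functionally, the current state of an aggregate is a fold (left reduction) over its event stream, which makes complete rebuild, temporal query and snapshotting fall out naturally. A minimal sketch, where the event shape and the snapshot convention are invented for illustration:

```python
from functools import reduce

# Each event records a signed journal amount posted to the account.
events = [
    {"type": "Deposited", "amount": 100},
    {"type": "Withdrawn", "amount": -30},
    {"type": "Deposited", "amount": 50},
]

def apply_event(balance: int, event: dict) -> int:
    # The state transition function: the ONLY way state ever changes.
    return balance + event["amount"]

# Complete rebuild: fold every event over an empty starting state.
balance = reduce(apply_event, events, 0)

# Temporal query: replay only the events up to a point in time.
balance_after_two = reduce(apply_event, events[:2], 0)

# Snapshot optimisation: persist a known-good state periodically and
# fold only the events that arrived after the snapshot was taken.
snapshot = {"at_event": 2, "balance": 70}
current = reduce(apply_event, events[snapshot["at_event"]:], snapshot["balance"])
```

The snapshot answer and the full rebuild agree by construction, which is what makes snapshotting a safe optimisation rather than a change of semantics.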

The event sourcing pattern is also often applied in conjunction with the CQRS pattern, where the write side (“the events”) is separated from the read side. By way of example, on the write side events may be stored in a message store such as Kafka or RabbitMQ. The read side subscribes to these events, and they are projected into a data model optimised for reading, e.g. a relational database which may or may not be normalised. This allows for the separation of read queries, and consequently the read side may be scaled independently from the write side. Since the read side will be eventually consistent, this pattern is only applicable where the use case can tolerate latency between the read and write side. Alternatively, where consistency is imperative and there is a requirement to scale elastically, data can be sharded by some convention so that within a shard the data is consistent.

The implementation of a microservices architecture brings with it many benefits, by way of example:

• Independent deployment.
• Teams can work at different paces.
• Different technology stacks can be used in each bounded context, allowing developers to use the best tool for the job.
• Different parts of the domain can be scaled independently.

The approach does have one major drawback: it introduces technical complexity, both from an infrastructure and a development point of view. From an infrastructure point of view, deployment, monitoring and release management all become significantly more complex, as these microservices can be deployed separately across hybrid infrastructures using different technology stacks. The major issue introduced from a development point of view is data consistency (back to the CAP theorem) when the context of a “transaction” spans bounded contexts. This type of transaction is referred to in the DDD lexicon as a saga. The saga pattern describes how to solve distributed (business) transactions without two-phase commit, as this does not scale in distributed systems.
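A read-side projection in this style can be sketched as follows. This is an in-memory stand-in: in practice the events would arrive from a broker such as Kafka and the read model would live in its own database, and the event and field names here are invented for the example:

```python
# Write side: an append-only stream of domain events.
event_stream = [
    {"type": "AccountOpened",  "account_id": "ACC-001", "holder": "Alice"},
    {"type": "FundsDeposited", "account_id": "ACC-001", "amount": 100},
    {"type": "FundsDeposited", "account_id": "ACC-001", "amount": 50},
]

# Read side: a denormalised view optimised for queries, rebuilt by
# subscribing to (here: simply iterating over) the event stream.
read_model: dict = {}

def project(event: dict) -> None:
    if event["type"] == "AccountOpened":
        read_model[event["account_id"]] = {"holder": event["holder"], "balance": 0}
    elif event["type"] == "FundsDeposited":
        read_model[event["account_id"]]["balance"] += event["amount"]

for event in event_stream:
    project(event)
```

Because the projection is driven purely by the stream, it can be dropped and rebuilt at any time, and several differently-shaped read models can be projected from the same write side.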
The basic idea is to break the overall transaction into multiple steps or activities. Only the individual steps can be performed as atomic transactions; overall consistency is taken care of by the saga. The saga has the responsibility either to get the overall business transaction completed or to leave the system in a known termination state. So, in case of errors, a business rollback procedure is applied, which occurs by calling compensation steps or activities in reverse order.

Saga execution controllers (SECs) can be implemented in two ways: using the process orchestration pattern, or using a choreographed approach. One approach to process orchestration is to use a lightweight BPMN engine, which allows compensating transactions to be modelled within the BPMN modelling tool. Whether one uses a BPMN engine to implement the process manager pattern or writes a custom SEC, both require process state to be maintained, and hence the pattern may be criticised for not being able to scale.

The choreographed approach can be implemented using the routing slip pattern. The activities are grouped in a composite job (routing slip) that is handed along the activity chain. If required, the routing slip items can be signed/encrypted so that they can only be understood and manipulated by the intended receiver. When an activity completes, it adds a record of the completion to the routing slip, along with information on where its compensating operation can be reached (e.g. via a queue). When an activity fails, it cleans up locally and then sends the routing slip backwards to the last completed activity's compensation address to unwind the transaction outcome.
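A compensating rollback of this kind can be sketched as follows. This is a toy in-process version: real implementations hand the slip between services over messaging, and the activity names and failure are invented for the example:

```python
# Each saga activity pairs a forward action with a compensating action.
def reserve_funds(ctx):  ctx["funds_reserved"] = True
def release_funds(ctx):  ctx["funds_reserved"] = False
def book_payment(ctx):   raise RuntimeError("downstream service unavailable")
def cancel_payment(ctx): pass  # nothing to undo if booking never succeeded

# The routing slip: the composite job handed along the activity chain.
routing_slip = [
    (reserve_funds, release_funds),
    (book_payment, cancel_payment),
]

def run_saga(slip, ctx):
    completed = []  # completed steps, each with its compensation "address"
    for action, compensate in slip:
        try:
            action(ctx)
            completed.append(compensate)
        except Exception:
            # Failure: unwind by calling compensations in reverse order,
            # leaving the system in a known termination state.
            for comp in reversed(completed):
                comp(ctx)
            return "rolled back"
    return "completed"

ctx = {}
result = run_saga(routing_slip, ctx)
```

Here `book_payment` fails, so the saga replays `release_funds` to undo the reservation: the overall business transaction did not complete, but the system ends in a consistent, known state.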

