DDDD: Eric Evans Interviews Greg Young

[UPDATE: Well this is embarrassing. This post is starting to get a good amount of traffic and is getting listed on daily link blogs such as Dew Drop. Unlike the other posts in my DDDD series, this particular post was supposed to serve more as a reminder to me, rather than a polished set of coherent and sequential thoughts. It probably got ranked fairly high because it's so keyword dense.]

I have watched the InfoQ video of Eric Evans interviewing Greg Young several times. What follows below could be considered the "minutes" of the interview. I'm mostly recording the things that stuck out to me from what both Greg and Eric said. The thoughts aren't supposed to be coherent out of context, but as you watch the video you may see these points jump out.

  • When an OrderCreateMessage is received (sounds like a command), the system locates the symbol for the order on the message and adds the order to the appropriate order book, at which point the order book sits there and waits (see the order-book sketch after this list).
  • [Question to self:] How is the "context" of the original command, i.e. its intent, maintained as corresponding events are generated as a result of the command? ANSWER: Newly generated events are specific to the commands that resulted in their creation (e.g. a RemoveTradedVolumeMessage command generates a VolumeTradedMessage), even though several types of messages may have the same effect on the object.
  • Changes always appear in groupings.
  • The framework underneath the state transition messages allows them to flow through either a pipeline or a peer-to-peer pub/sub pattern.
  • The domain is broken down into bounded contexts. Those contexts are asynchronously mapped to each other instead of coming back to the root domain.
  • The decision to use a pipeline vs. a pub/sub model has to do with the percentage of messages a particular bounded context cares about: if it cares about the majority, use a pipeline (both styles are sketched after this list).
  • The anti-corruption layer lives on the receiving bounded context.
  • State transitions are part of the core domain—or perhaps a shared kernel. The receiving bounded context will only ever work with those messages (as transformed by the anti-corruption layer) rather than the domain objects from the other bounded context.
  • Pub/sub works so well because the receiving context has no idea where it's getting the data from—this seems to imply that in a pipeline, the receiving context does know where it's getting the data from.
  • Distribution of the aggregates (partitioning) must be governed in a very specific way: in their scenario it's to ensure that no two stock symbols live on the same machine in their mirrored cluster (see the partitioning sketch after this list).
  • Understanding of the domain is critical to the correct partitioning of the domain across different servers, e.g. keeping stock symbols in different mirrored nodes.
  • Objects are responsible for tracking their own state changes (see the state-change sketch after this list), e.g. "We say to the order: 'Give me your state changes.' The order will say, 'This is what has changed since I've been opened.'"
  • We can take those state changes from the repository and push them to a pipeline or apply them directly to a database.
  • The repository is still responsible for knowing what to do with state changes.
  • One repository per aggregate root (normal DDD).
  • The call for state changes to the aggregate root causes the root to interrogate all of the children for their state changes, which becomes a single Unit of Work. [The child entities could potentially use the observer pattern to notify the aggregate root of changes.]
  • "We have found in pretty much everything we have done that your partitioning boundaries are almost always on your aggregate roots." This appears to be partitioning of the bounded context across multiple servers such that the same aggregate root doesn't live on two different servers simultaneously (excepting the mirroring scenario as mentioned above).
  • You can separate data models on the aggregate roots, e.g. one DB instance per aggregate root.
  • DDD Book: the aggregate is a transaction boundary, but Evans has encountered that aggregate roots make great boundaries for partitioning AND distribution.
  • [This was a bit confusing for me:] "The other thing that we have noticed with aggregate roots is whether or not something lives in an aggregate root is actually me explicitly saying that it should belong to the same data model as the aggregate root. Often times we have shared data which is shared between many aggregate roots which we will deliberately put a service over and have the aggregate root actually talk to the service when it needs to. And the reason that we do this is so that in our back-end data model, we can take that separate piece of data that's being used in many aggregate roots and actually pull it out into its own data store which is shared between them. We then use a database per aggregate root or a set of databases which are partitioned on the aggregate root." [See the shared-data sketch after this list.]
  • The anti-corruption layer receives a message, transforms it, and publishes the new, transformed message to its own bounded context (see the anti-corruption sketch after this list).
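
A few rough sketches to make these notes more concrete. This is not Greg's actual code, just how I picture the mechanics, written in TypeScript for brevity; every name that doesn't appear in the notes above is made up. First, the order-book sketch: a handler receives an OrderCreateMessage, finds the order book for the symbol on the message, and adds the order, after which the order book sits and waits.

```typescript
// Illustrative only: OrderCreateMessage is from the notes; OrderBook,
// OrderBookRepository and OrderCreateHandler are names I invented.

interface OrderCreateMessage {
  orderId: string;
  symbol: string;
  side: "buy" | "sell";
  price: number;
  quantity: number;
}

class OrderBook {
  private readonly orders: OrderCreateMessage[] = [];

  constructor(readonly symbol: string) {}

  // The order is added and then simply sits there and waits until a
  // matching order arrives.
  add(order: OrderCreateMessage): void {
    this.orders.push(order);
  }
}

class OrderBookRepository {
  private readonly books = new Map<string, OrderBook>();

  // Locate (or lazily create) the order book for a symbol.
  forSymbol(symbol: string): OrderBook {
    let book = this.books.get(symbol);
    if (!book) {
      book = new OrderBook(symbol);
      this.books.set(symbol, book);
    }
    return book;
  }
}

class OrderCreateHandler {
  constructor(private readonly books: OrderBookRepository) {}

  handle(message: OrderCreateMessage): void {
    // The symbol on the message tells us which order book this order
    // belongs to.
    this.books.forSymbol(message.symbol).add(message);
  }
}
```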
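The pipeline vs. pub/sub sketch: both deliver the same state transition messages, but a pipeline stage sees everything that flows through, while a pub/sub subscriber only sees the message types it registered for and has no idea where they came from. These two classes are illustrative, not the framework Greg describes.

```typescript
// Illustrative only: a minimal pipeline and a minimal pub/sub dispatcher.

type Message = { type: string; payload: unknown };
type Handler = (message: Message) => void;

// Pipeline: every stage receives every message, in order, and knows it is
// part of a fixed chain. Use this when a context cares about the majority
// of the messages.
class MessagePipeline {
  private readonly stages: Handler[] = [];

  addStage(stage: Handler): void {
    this.stages.push(stage);
  }

  push(message: Message): void {
    for (const stage of this.stages) {
      stage(message);
    }
  }
}

// Pub/sub: a context subscribes only to the message types it cares about
// and has no idea where the messages come from.
class PubSub {
  private readonly subscribers = new Map<string, Handler[]>();

  subscribe(type: string, handler: Handler): void {
    const handlers = this.subscribers.get(type) ?? [];
    handlers.push(handler);
    this.subscribers.set(type, handlers);
  }

  publish(message: Message): void {
    for (const handler of this.subscribers.get(message.type) ?? []) {
      handler(message);
    }
  }
}
```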
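The anti-corruption sketch: the layer lives on the receiving bounded context, picks up the other context's messages, translates them into this context's own message types, and republishes them locally, so nothing inside the context ever touches the other context's objects. VolumeTradedMessage is from the notes; the rest is invented.

```typescript
// Illustrative only: translate an external message and republish it locally.

// Published by the other (trading) bounded context.
interface VolumeTradedMessage {
  symbol: string;
  volume: number;
}

// This bounded context's own message type (hypothetical).
interface RiskVolumeRecorded {
  instrument: string;
  tradedVolume: number;
}

interface LocalBus {
  publish(message: RiskVolumeRecorded): void;
}

class AntiCorruptionLayer {
  constructor(private readonly bus: LocalBus) {}

  // Called by the pub/sub (or pipeline) infrastructure for each incoming
  // message from the other context.
  onVolumeTraded(external: VolumeTradedMessage): void {
    const translated: RiskVolumeRecorded = {
      instrument: external.symbol,
      tradedVolume: external.volume,
    };
    // The rest of this bounded context only ever sees the translated message.
    this.bus.publish(translated);
  }
}
```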
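The partitioning sketch: because the partition boundary is the aggregate root (here, the order book keyed by stock symbol), every message for a symbol can be routed to exactly one node, and which symbols live where is a deliberate, domain-informed decision rather than an arbitrary hash. Again, these names are mine.

```typescript
// Illustrative only: explicit, domain-informed assignment of aggregate
// roots (stock symbols) to cluster nodes.

interface ClusterNode {
  name: string;
}

class SymbolPartitionMap {
  private readonly assignments = new Map<string, ClusterNode>();

  // Which symbols live on which node is decided deliberately up front.
  assign(symbol: string, node: ClusterNode): void {
    this.assignments.set(symbol, node);
  }

  // Every message for a given symbol is routed to exactly one node, so the
  // same aggregate root never lives on two servers at once.
  nodeFor(symbol: string): ClusterNode {
    const node = this.assignments.get(symbol);
    if (!node) {
      throw new Error(`No node assigned for symbol ${symbol}`);
    }
    return node;
  }
}
```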
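The state-change sketch: child entities track their own changes, the aggregate root interrogates them when asked for its state changes, and the repository pulls that single unit of work and decides whether to push it to a pipeline or apply it to a database. Order, OrderLine, StateChange, Pipeline and OrderStore are all names I made up for the sketch.

```typescript
// Illustrative only: "give me your state changes" flowing from children,
// through the aggregate root, into the repository.

interface StateChange {
  readonly type: string;
  readonly data: Record<string, unknown>;
}

// A child entity tracks what has changed on it...
class OrderLine {
  private readonly changes: StateChange[] = [];

  changeQuantity(quantity: number): void {
    this.changes.push({ type: "QuantityChanged", data: { quantity } });
  }

  getStateChanges(): StateChange[] {
    return this.changes;
  }
}

// ...and the aggregate root interrogates all of its children, so one call
// returns everything that changed since the order was opened.
class Order {
  private readonly changes: StateChange[] = [];
  private readonly lines: OrderLine[] = [];

  constructor(readonly id: string) {
    this.changes.push({ type: "OrderOpened", data: { id } });
  }

  addLine(line: OrderLine): void {
    this.lines.push(line);
  }

  getStateChanges(): StateChange[] {
    return [...this.changes, ...this.lines.flatMap(l => l.getStateChanges())];
  }
}

interface Pipeline {
  publish(changes: StateChange[]): void;
}

interface OrderStore {
  append(orderId: string, changes: StateChange[]): void;
}

// The repository, not the aggregate, knows what to do with the changes:
// write them to a database, push them down a pipeline, or both.
class OrderRepository {
  constructor(
    private readonly pipeline: Pipeline,
    private readonly store: OrderStore,
  ) {}

  save(order: Order): void {
    const unitOfWork = order.getStateChanges(); // one unit of work
    this.store.append(order.id, unitOfWork);
    this.pipeline.publish(unitOfWork);
  }
}
```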
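Finally, the shared-data sketch, which is how I read the confusing quote above: data needed by many aggregate roots doesn't get folded into each root's data model; instead the roots call a service, and the data sits in its own store behind that service. ReferenceDataService and lotSize are hypothetical.

```typescript
// Illustrative only: shared reference data lives behind a service with its
// own data store, instead of being duplicated into every aggregate root's
// database.

interface ReferenceDataService {
  lotSize(symbol: string): number;
}

class OrderBookAggregate {
  constructor(
    readonly symbol: string,
    private readonly referenceData: ReferenceDataService, // shared, not owned
  ) {}

  isValidQuantity(quantity: number): boolean {
    // The aggregate talks to the service when it needs the shared data,
    // rather than holding it in its own data model.
    const lot = this.referenceData.lotSize(this.symbol);
    return quantity > 0 && quantity % lot === 0;
  }
}
```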