Almost two months ago I went to Montreal to visit Greg Young and get some clarification on a few lesser-known points of his DDDD (CQRS+DDD+Event Sourcing) style of programming and preachment. He’s really great to talk to but it can sometimes be like sipping from the fire hose. This is a good thing.
One key point that Greg kept hammering on was his dislike of frameworks. Not all frameworks, per se, but just our general inclination as .NET developers to framework-ize everything. In many situations this creates additional cost and maintenance overhead when [gasp!] copy & paste would serve us much better. At the same time, there was one key area where Greg mentioned that a great deal of value could be extracted by creating a library of reusable code. That area was for the event store.
The event store serves as one of the foundational infrastructure components in a typical event sourcing-based architecture. Over a year ago I posted a potential relational database schema for creating an event store. That post did not go into much detail on how to build an event store. It simply outlined how we might overlay a typical event store on top of a traditional RDBMS.
During my three days with Greg I took nearly 50 pages of notes as he unloaded all the various facets of his experience implementing CQRS with event sourcing. One area of special attention was the event store. One key benefit of having followed Greg’s methodologies closely for the last 18+ months is that I was able to extract a great deal more than one might normally get when hearing it all for the first time.
With all of this background in mind, I went ahead and implemented a storage-engine agnostic CQRS event store.
Design Principles and Goals
I had several foundational principles when creating my event store.
Have no external dependencies.
Do not force the user of the code to implement any special interfaces such as IEvent.
Make the storage engine completely pluggable so one could easily be created for a relational DB or even a NoSQL engine, such as a document, graph, or object DB.
Accomplish all persistence in a single round trip.
Reference only one assembly through the magic of ILMerge.
Support optimistic concurrency to facilitate intelligent merging by client code.
Support up-conversion of events and snapshots to the latest version as their schemas evolve over time.
Support any kind of serialization, e.g. JSON, BSON, Protocol Buffers, XML, etc.
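To make these goals a little more concrete, here is a rough sketch of what a pluggable, serializer-agnostic persistence contract might look like. Every name here (IPersistStreams, Commit, ISerialize, and so on) is a hypothetical illustration of the design goals, not the actual API of the library:

```csharp
// Hypothetical sketch only -- illustrative names, not the library's real API.
using System;
using System.Collections.Generic;

// Events are plain objects; no IEvent marker interface is required.
public class Commit
{
    public Guid StreamId { get; set; }          // identity of the aggregate's stream
    public long ExpectedRevision { get; set; }  // enables optimistic concurrency
    public List<object> Events { get; set; }    // the batch persisted in one round trip
}

// The storage engine is completely pluggable: any relational or NoSQL
// engine can participate by implementing a single small contract.
public interface IPersistStreams
{
    // Writes the entire commit in a single round trip; a stale
    // ExpectedRevision surfaces as a concurrency conflict.
    void Persist(Commit attempt);

    IEnumerable<object> LoadEvents(Guid streamId, long minRevision);
}

// Serialization is likewise pluggable: JSON, BSON, Protocol Buffers, XML, etc.
public interface ISerialize
{
    byte[] Serialize(object graph);
    object Deserialize(byte[] payload);
}
```

The key point of the sketch is that domain events stay plain CLR objects; the store depends on the contracts, never the other way around.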
As a quick way to get things moving and to make it easy to plug in, I have included implementations for three popular RDBMS engines—Microsoft SQL Server 2000 (or later), MySQL 5 (or later), and SQLite 3. I will also be implementing one for PostgreSQL in the near future. Basically, all of these implementations work by utilizing IDbConnection and executing slightly different SQL statements depending upon the “dialect” specified in the IoC wire-up code.
I also created all necessary DDL (CREATE TABLE) statements for each SQL implementation in order to establish the event store schema.
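As a rough illustration of the shape such a schema takes (this is a sketch, not the exact DDL that ships with the library), a SQL Server variant might look something like:

```sql
-- Illustrative sketch only; the real DDL scripts accompany the library.
CREATE TABLE Commits
(
    StreamId       UNIQUEIDENTIFIER NOT NULL,  -- identity of the event stream
    CommitSequence INT              NOT NULL,  -- ordering within the stream
    Payload        VARBINARY(MAX)   NOT NULL,  -- serialized batch of events
    Snapshot       VARBINARY(MAX)   NULL,      -- optional snapshot, if taken
    CONSTRAINT PK_Commits PRIMARY KEY (StreamId, CommitSequence)
);
```

Note how a composite primary key on (StreamId, CommitSequence) gives optimistic concurrency almost for free: two writers racing to commit the same sequence will collide on the key, and the loser can reload and merge.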
One critical advantage my library has over the various CQRS frameworks is that it batches all of the events and sends them to the database in a single trip. Many of the current CQRS frameworks, as of this writing, loop over each event to be persisted, which causes multiple round trips per commit. My implementation can store about 1,200 events at a time for SQL Server (at which point SQL Server balks because the payload becomes too large) and several thousand for MySQL using the default MySQL .NET connector.
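The single-round-trip idea can be sketched as follows: rather than one INSERT per event, the whole serialized batch travels inside one parameterized command. Again, this is a hypothetical illustration using plain ADO.NET, not the library’s actual code:

```csharp
// Hypothetical sketch: persist an entire batch of serialized events in one round trip.
using System;
using System.Data;

public static class CommitPersistence
{
    public static void Persist(
        IDbConnection connection, Guid streamId, long sequence, byte[] serializedEvents)
    {
        using (var command = connection.CreateCommand())
        {
            // One parameterized statement, one round trip: all of the commit's
            // events travel together inside a single payload parameter.
            command.CommandText =
                "INSERT INTO Commits (StreamId, CommitSequence, Payload) " +
                "VALUES (@StreamId, @CommitSequence, @Payload);";

            AddParameter(command, "@StreamId", streamId);
            AddParameter(command, "@CommitSequence", sequence);
            AddParameter(command, "@Payload", serializedEvents);

            // A primary-key violation here means another writer committed this
            // sequence first, i.e. an optimistic-concurrency conflict.
            command.ExecuteNonQuery();
        }
    }

    private static void AddParameter(IDbCommand command, string name, object value)
    {
        var parameter = command.CreateParameter();
        parameter.ParameterName = name;
        parameter.Value = value;
        command.Parameters.Add(parameter);
    }
}
```

Because the events are serialized into a single payload before the command runs, the cost per commit stays flat no matter how many events the aggregate produced.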
License, Docs, and Example Code
First off, the license is MIT. Take it and do what you want with it. If you do something with it, I’d appreciate a little recognition, but other than that, do what you want.
I have enabled XML/Code documentation on the interfaces within the library. This should help to a certain extent with figuring out how to use the code.
One other area that still needs some help is example code. I’ll be pushing a readme file so that GitHub can display the sample code.
Notes from Greg on Document DBs
One quirk that Greg mentioned during my three days with him concerned document databases. RavenDB excepted, because it has first-class support for event sourcing, most document DBs have no good way to link a series of events together. In those cases, Greg actually recommended persisting a snapshot along with the events at each commit. Basically, each time the aggregate is committed, a new snapshot would be persisted as a key part of the document alongside the new events. In this way, our aggregate would be built from a snapshot rather than from events, but we would still have the events and could run some kind of MapReduce query to get at the underlying stream of events.
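Concretely, each commit against a document database might replace the aggregate’s document with something shaped like this (a hypothetical layout I’m sketching here, not one prescribed by Greg or the library):

```json
{
  "id": "order-42",
  "revision": 7,
  "snapshot": {
    "status": "Shipped",
    "itemCount": 3
  },
  "events": [
    { "revision": 6, "type": "ItemAdded" },
    { "revision": 7, "type": "OrderShipped" }
  ]
}
```

Loading the aggregate then means deserializing the snapshot field directly, while the events array remains available for projections or MapReduce-style queries over the full stream.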
Have fun. Copy it. Tear it apart. Tell me why the API is terrible and should be improved. Create implementations for CouchDB and the like.