I was working with some legacy data the other day when I suddenly realized I needed to know exactly how old the data was. The problem? I had no way to get that information precisely. The specific scenario: I was working with customers' phone numbers and addresses. Knowing how old a piece of information is, especially contact info, is critical, because the older or more stale it is, the higher the potential for error. I could try to infer the age from when the record was last updated, or from a few other similar audit elements, but the database schema didn't have any dates definitively attached to the updating of the address or phone number attributes, only to the record as a whole.
This is another simple and oft-forgotten benefit of event sourcing: we can easily replay all events into a new model, projecting them out to serve some previously unknown business need. In the typical single read/write model approach without event sourcing, that information is lost because it was never persisted, not to mention the loss of business intent.
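To make that concrete, here is a minimal Python sketch of the idea, under the assumption that each event records which attribute changed and when; the names (ContactInfoChanged, project_last_changed) are illustrative, not taken from any particular event store or framework.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical event shape: every contact-info change is persisted with
# the attribute that changed and the time it happened.
@dataclass
class ContactInfoChanged:
    customer_id: str
    attribute: str        # e.g. "address" or "phone_number"
    new_value: str
    occurred_at: datetime

def project_last_changed(events):
    """Replay the full event stream into a brand-new read model that answers
    the previously unasked question: when was each attribute last updated?"""
    last_changed = {}  # (customer_id, attribute) -> occurred_at
    for event in events:
        last_changed[(event.customer_id, event.attribute)] = event.occurred_at
    return last_changed

# Replaying a couple of events for one customer
events = [
    ContactInfoChanged("cust-1", "address", "12 Elm St", datetime(2019, 3, 4)),
    ContactInfoChanged("cust-1", "phone_number", "555-0101", datetime(2021, 7, 9)),
]
print(project_last_changed(events)[("cust-1", "address")])  # 2019-03-04 00:00:00
```

The projection itself is trivial to write after the fact, precisely because the event stream already preserved every change along with when it happened.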
The business requirement of tracking all changes with a quasi-temporal database was the original reason I got into event sourcing. The complexity of good temporal database design was (and is) astronomical, to the point of destroying any business benefit through development overhead. Conversely, event sourcing provides an extremely simple, elegant, and useful way of achieving the same result, one that even non-technical people can understand, plus the added benefit of replaying into new, unforeseen models.