Batching events by their source aggregate also clearly defines the scope and lifetime of every projector, which makes several optimizations in the event projection process possible. However, it also means that building a projection combined from the events of two or more aggregates (or aggregate types) requires a corresponding number of separate projectors, all working on the same read model. This is a compromise made to keep the framework architecture simpler and cleaner. While it might seem to make writing such projections unnecessarily verbose, it makes sense from the perspective of write-side consistency. In other words, because the aggregates themselves define (in DDD terms) the consistency boundaries, the system always modifies one aggregate at a time, and the projectors therefore always receive events in batches corresponding to those single-aggregate modifications. By the same reasoning, because modifications of two different aggregates are always independent and only eventually consistent, any attempt at immediate consistency of read models spanning two or more aggregates would be fictitious and, in reality, unattainable.
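A minimal sketch of this arrangement, with entirely hypothetical names (`OrderProjector`, `PaymentProjector`, `read_model` are illustrations, not part of any framework API): two separate projectors, one per aggregate type, both writing to the same read model, each receiving event batches that originate from a single aggregate modification.

```python
from collections import defaultdict

# Shared read model: order summaries keyed by order id.
# Both projectors below write to this one structure.
read_model = defaultdict(dict)

class OrderProjector:
    """Handles batches of events coming from Order aggregates only."""
    def apply(self, batch):
        # Each batch corresponds to one modification of one Order aggregate.
        for event in batch:
            if event["type"] == "OrderPlaced":
                read_model[event["order_id"]]["status"] = "placed"

class PaymentProjector:
    """Handles batches of events coming from Payment aggregates only,
    projecting onto the same read model as OrderProjector."""
    def apply(self, batch):
        for event in batch:
            if event["type"] == "PaymentReceived":
                read_model[event["order_id"]]["paid"] = True

# The two batches arrive independently; the combined read-model entry
# becomes complete only eventually, never atomically.
OrderProjector().apply([{"type": "OrderPlaced", "order_id": "42"}])
PaymentProjector().apply([{"type": "PaymentReceived", "order_id": "42"}])
```

Note that between the two `apply` calls the read model reflects only the Order side, which is exactly the eventual consistency across aggregates described above.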