I'm currently reading "Implementing Domain-Driven Design" and working through the code samples, but I'm having trouble modeling aggregates that store a huge list of value objects. For example, here's one of the subdomains dealt with in the book (adapted for Rust):
use std::collections::HashSet;
use chrono::{DateTime, Utc};

struct Person {
    id: PersonId,
}

// Hash + Eq are needed so GroupMember can live in a HashSet.
#[derive(PartialEq, Eq, Hash)]
struct GroupMember {
    person_id: PersonId,
    joined: DateTime<Utc>,
}

struct Group {
    id: GroupId,
    members: HashSet<GroupMember>,
}
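(Assume PersonId and GroupId are just simple newtype ID wrappers, e.g. around UUIDs, roughly like this:)

use uuid::Uuid;

#[derive(PartialEq, Eq, Hash, Clone, Copy)]
struct PersonId(Uuid);

#[derive(PartialEq, Eq, Hash, Clone, Copy)]
struct GroupId(Uuid);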
Person and Group here are aggregates, and GroupMember is a value object inside the Group aggregate.
What troubles me is that members here can represent a really big list, so loading all of them into memory every time can really hurt performance.
I tried to look into prior art for dealing with this problem, but I can't find any information on it. I'm not very familiar with Java, but it looks like a lot of DDD examples use some kind of lazy-loading mechanism provided by a Java ORM. I'm using a raw SQL library, though, so that's not an option for me.
The way I deal with it right now is to extend the Group aggregate's repository with methods for fetching the GroupMember value objects separately:
trait GroupRepository {
    // Fetch up to `limit` members of the given group.
    fn group_member_of_group_id(&mut self, group_id: GroupId, limit: u32) -> Vec<GroupMember>;

    // Add a single member to the given group.
    fn add_group_member(&mut self, group_id: GroupId, member: GroupMember);
}
The limit parameter is there so that callers can fetch members gradually, without having to load tens of thousands of them straight into memory.
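For context, here's a rough sketch of how I'd implement this on top of a raw SQL client (I'm using the postgres crate here purely as an example; the group_members table, its columns, and the UUID-newtype IDs above are made-up assumptions, not code from the book):

use postgres::Client;

struct PostgresGroupRepository {
    client: Client,
}

impl GroupRepository for PostgresGroupRepository {
    fn group_member_of_group_id(&mut self, group_id: GroupId, limit: u32) -> Vec<GroupMember> {
        // Assumes the postgres crate's `with-uuid-1` and `with-chrono-0_4`
        // features so Uuid and DateTime<Utc> map to UUID/TIMESTAMPTZ columns.
        let limit = i64::from(limit);
        let rows = self
            .client
            .query(
                "SELECT person_id, joined
                 FROM group_members
                 WHERE group_id = $1
                 ORDER BY joined
                 LIMIT $2",
                &[&group_id.0, &limit],
            )
            .expect("query failed");

        rows.iter()
            .map(|row| GroupMember {
                person_id: PersonId(row.get("person_id")),
                joined: row.get("joined"),
            })
            .collect()
    }

    fn add_group_member(&mut self, group_id: GroupId, member: GroupMember) {
        self.client
            .execute(
                "INSERT INTO group_members (group_id, person_id, joined) VALUES ($1, $2, $3)",
                &[&group_id.0, &member.person_id.0, &member.joined],
            )
            .expect("insert failed");
    }
}

So the full member list never gets materialized on the Group aggregate itself; it's only ever pulled in pages through the repository.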
Is this a valid approach? Has a concept like this been explored before in DDD? Some advice would be appreciated.