How to pass a list of objects in a command?


I would like to create some way to update the data in our application’s database while it’s already running on the server. The data is in JSON format and consists of 1,000+ entries. I am thinking about creating a command for this purpose that would handle the JSON data by checking for the presence of each object in the database and acting accordingly: create it if it doesn’t exist, update it if it does, and delete it if it exists in the database but not in the JSON file (each case would fire a separate command). The problem is that I don’t really know how to pass the data to the command. As far as I know, concepts can only wrap primitive types, so creating something like:

public class DataList : ConceptAs<List<DataItem>>
{
    public static implicit operator DataList(List<DataItem> value)
    {
        return new DataList { Value = value };
    }
}
and creating a command with a property of the DataList type is not possible.

One solution would be to set the generic type of ConceptAs to string, pass the JSON data as a string, and then deserialize it into the desired objects. But this feels a bit hacky.
Another would be to execute one unified command in a loop for each of the objects and handle the update logic there. But then I feel I would break the Single Responsibility Principle, as one command would be in charge of creating, updating, and deleting objects in the database.

Could you please give me some guidance on how to deal with this kind of scenario?


Let’s take the technical issue first:

Concept is only for primitives.

Not everything on a command has to be a concept. Concepts only exist to give more domain meaning to your primitives. We have another structure, Value, for things that follow value semantics rather than entity semantics, but I’m guessing in your case the thing has an identity, in which case it’s just a class with an id and properties.

It’s perfectly acceptable to have an array / enumerable on a command. If you have wrapped this up into a class, then you can just put that class on your command. The only requirement is that it has to be serializable to and from JSON. You might need to provide a custom serializer if it doesn’t have a default constructor or public setters.
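As a minimal sketch of that shape (all class and property names here are illustrative, not from the thread; in a Dolittle app the command class would additionally implement the SDK’s command marker interface):

```csharp
using System;
using System.Collections.Generic;

// A plain item with identity; nothing here needs to be a concept.
public class DataItem
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

// The command simply carries the collection. It only needs to be
// serializable to and from JSON, so public setters and a default
// constructor keep the default serializer happy.
public class ImportDataItems
{
    public List<DataItem> Items { get; set; } = new List<DataItem>();
}
```
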


What you are suggesting here doesn’t make much sense in terms of the modelling of aggregates and commands.

First, there isn’t a database. There is an event store with events and a read database that has persisted the projected state of these events.

An aggregate is a transaction boundary that is designed to maintain your invariant business rules. You want to make it as small as possible (for performance and concurrency reasons) while still maintaining your rules. A command is a specific transaction: it either succeeds or fails as a whole, and it only operates on one aggregate.

A command that operates across multiple aggregates is not possible. You cannot maintain transactional consistency which is the very definition of the aggregate / command combination. If you do need to maintain transactional consistency, you’d have to model your aggregates to achieve this.

Of course, you can use the infrastructure of the command / command endpoint to send a message to the server with your data. Within your command handler you can execute commands against the specific aggregates. However, the bulk command will not be transactional. The individual commands that you fire off will be transactional. Remember also that you cannot return anything on a command result other than the success flag. That means you cannot return what happened on the individual commands on the bulk command result.

This is telling you that the bulk command isn’t actually a command. You can just add another endpoint that allows you to send this bulk data, performs the individual commands and returns the results. If it’s an MVC app, you can add a controller and just do what you want there.
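A rough sketch of such an endpoint, assuming an ASP.NET Core MVC app (the route, payload, and result types are all illustrative; in a Dolittle app you would inject the command coordinator and dispatch one single-aggregate command per item where the placeholder comment sits):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

// Payload and per-item result types (illustrative names).
public class BulkItem
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

public class BulkItemResult
{
    public Guid Id { get; set; }
    public bool Success { get; set; }
}

[Route("api/data-import")]
public class DataImportController : Controller
{
    [HttpPost]
    public IActionResult Import([FromBody] List<BulkItem> items)
    {
        // Unlike a command result, this endpoint can report what
        // happened to each individual item.
        var results = new List<BulkItemResult>();
        foreach (var item in items)
        {
            // Placeholder for: decide create/update/delete, dispatch
            // the corresponding single-aggregate command, record its
            // outcome.
            results.Add(new BulkItemResult { Id = item.Id, Success = true });
        }
        return Ok(results);
    }
}
```

The bulk call itself stays non-transactional; only each dispatched command is.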

But in doing this, you are saying that you are just doing CRUD operations (which is exactly the create, update, and delete you are talking about). How do you know whether these things exist or need to be updated? You could model another aggregate that tracks what has been created (DataList and DataItem are again CRUD, not a rich domain), deleted, and potentially what needs updating, but this would have to be eventually consistent with the aggregates that represent the truth of the state of the “DataItem”.

Even this isn’t a good solution. If you have a JSON file that describes the state of the system, then the truth is not in your application or in your event store, and you should model that. Turn your JSON file into a stream of events, subscribe to that stream from your bounded context, and just write these to your read models. There is no need to go through any aggregate or issue any command. You don’t own these things any more; you are a passive recipient of the truth from another system.
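As a sketch of that passive subscriber (the ICanProcessEvents interface and EventProcessor attribute follow the old Dolittle SDK pattern, but treat the exact namespace and signatures as assumptions; the repository interface and the GUID are made up for illustration):

```csharp
using System;
using Dolittle.Events.Processing; // assumed namespace from the old SDK

// The external event as it arrives in this bounded context.
public class DataItemUpdated
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

// Hypothetical read-model repository.
public interface IDataItemReadModels
{
    void Upsert(Guid id, string name);
}

// No aggregate, no command: just project the external truth straight
// into the read models.
public class DataItemEventProcessor : ICanProcessEvents
{
    readonly IDataItemReadModels _readModels;

    public DataItemEventProcessor(IDataItemReadModels readModels)
        => _readModels = readModels;

    [EventProcessor("8a3f1c2e-5b4d-4e6f-9a7b-1c2d3e4f5a6b")] // arbitrary id
    public void Process(DataItemUpdated @event)
        => _readModels.Upsert(@event.Id, @event.Name);
}
```
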

If, for illustration’s sake, we say that it’s an order: in your system, you create an order request and then send it to the external system. Your system only owns the order request, not the order. You model the request as an aggregate and the order as a read model that is updated from events from the real order management system. You would probably model the request as a single message to the real order system. Once the message is sent, you can’t do anything else to that order request; if you want to update the order that was created, that’s a new request.

You need to think about where the “truth” lies and model accordingly. If you think of an event-sourced system as a database and want to perform CRUD operations on it, you will struggle.


Thank you for the response and such an in-depth analysis of my problem.

I’ve managed to add an array of my data objects to a command by creating a wrapper class, as you suggested. However, I was not able to place it in the Concepts project. This is where we keep our model classes in general, even if they’re not concepts in the framework’s sense (not sure if that’s correct?). I had to place the wrapper class and the data model in the Events project; otherwise Dolittle wouldn’t let me build the solution.

When it comes to modelling - right now, the ‘truth’ lies in the JSON file that we include in the application source, from which we take the data that we pass to the ‘CreateDataItem’ commands in a loop to populate our read model database on application startup. This is a rather temporary solution, as all this data will be fetched from some external API at some point. For now, we just wanted a simple way to update the data and to reuse the data format that we already have.
However, even with the data coming from the API, our approach wouldn’t be suitable for an event-sourced application, if I understand you correctly. I think that subscribing to a stream of events would fit this situation (as we do not own the data in either case), but how could one implement that? Should it be some kind of event processor listening to, e.g., a DataItemUpdated event and updating the read model db accordingly?
And how should the stream of events be implemented? How could we send the events without firing a command? Could you please explain this solution a little bit more?

Have a look at the sample here:

The “Glance” bounded context is subscribing to events in the “Banking” bounded context.

In Glance, look at the Events.Banking project and look at the configuration in Core for event-horizons.json

List the events from banking (using the artifact identifier found in the artifacts.json config file in Banking) that you wish to receive.

If your “truth” is in the JSON file, I would create a bounded context to act as a proxy for the real truth. It can get the JSON file and the connection string to the read models in the other bounded context (this is obviously not something you would do for real), and it can use the combination of the db and the JSON file to decide which events to send to get you to the correct state.

I would try to make these events very coarse. Imagine two events: BlahChanged (which you treat as an insert or update) and BlahDeleted (which deletes it). In a real implementation, you are going to have a clear distinction between private (domain) events and public (integration) events.
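The proxy’s decision logic can be sketched framework-free: diff the desired state from the JSON file against the current read-model state and emit the two coarse events (BlahChanged / BlahDeleted as above; everything else here is illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public abstract class IntegrationEvent { public Guid Id { get; set; } }
public class BlahChanged : IntegrationEvent { public string Name { get; set; } } // insert or update
public class BlahDeleted : IntegrationEvent { }                                  // delete

public static class TruthProxy
{
    // Compare the desired state (from the JSON file) with the current
    // read-model state and produce the events needed to converge.
    public static List<IntegrationEvent> Diff(
        IDictionary<Guid, string> desired,
        IDictionary<Guid, string> current)
    {
        var events = new List<IntegrationEvent>();

        // New or changed items become BlahChanged.
        foreach (var kv in desired)
            if (!current.TryGetValue(kv.Key, out var existing) || existing != kv.Value)
                events.Add(new BlahChanged { Id = kv.Key, Name = kv.Value });

        // Items present in the read models but absent from the file
        // become BlahDeleted.
        foreach (var id in current.Keys.Except(desired.Keys))
            events.Add(new BlahDeleted { Id = id });

        return events;
    }
}
```
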