# First Steps
When working with event sourcing, the database is more than just a place to store data – it becomes the foundation for how your system records, replays, and reacts to change. Instead of keeping only the latest state, you capture a chronological history of facts, which can later be reconstructed, queried, or observed in real time.
To make this tangible, it is best to experience it hands-on. The following walkthrough is designed as a short 5-minute introduction: you will run an event store, write your first event, read it back, and finally observe how new events flow into the system.
This is not a comprehensive guide, but it gives you a practical impression of what working with events feels like. For deeper explanations and advanced scenarios, links to the full documentation are provided along the way.
## Run an Event Store
For this introduction we will use EventSourcingDB, a purpose-built event store that can be started locally with a single Docker command. This setup is minimal, yet sufficient to see the basic concepts in motion:
```shell
docker run -it -p 3000:3000 \
  thenativeweb/eventsourcingdb run \
  --api-token=secret \
  --data-directory-temporary \
  --http-enabled \
  --https-enabled=false \
  --with-ui
```
> **Development Setup Only**
> The configuration above is intentionally simplified. It is appropriate for local evaluation, but not for production use. For hardening and operational guidance, see Managing Configuration and Secrets.
Once the container is running, open http://localhost:3000 to access the management UI, or verify the API with a simple request:
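For example, a quick check might look like this (the `/api/v1/ping` path is an assumption here; consult the API reference for the exact health endpoint – any authorized API call will do):

```shell
# Ping the server, using the API token passed to the run command above.
curl \
  -H "authorization: Bearer secret" \
  http://localhost:3000/api/v1/ping
```

A successful response confirms the server is up and the token is accepted.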
If you prefer pinned versions, binaries, or Kubernetes deployments, see Installing EventSourcingDB and Running EventSourcingDB in the documentation.
## Write Your First Event
At the heart of event sourcing are events: small, immutable facts about what has happened. To illustrate this, consider a library. When a library acquires a new book, that is not merely a change of state – it is a fact worth recording. Let's model this as an event:
```shell
curl \
  -i \
  -X POST \
  -H "authorization: Bearer secret" \
  -H "content-type: application/json" \
  -d "{
    \"events\": [
      {
        \"source\": \"https://library.eventsourcingdb.io\",
        \"subject\": \"/books/42\",
        \"type\": \"io.eventsourcingdb.library.book-acquired\",
        \"data\": {
          \"title\": \"2001 – A Space Odyssey\",
          \"author\": \"Arthur C. Clarke\",
          \"isbn\": \"978-0756906788\"
        }
      }
    ]
  }" \
  http://localhost:3000/api/v1/write-events
```
What you submit is called an event candidate. Once accepted, EventSourcingDB enriches it with metadata, assigns it an identifier, and persists it immutably as an event. From that point on, the event becomes part of the system's history. For details about batching, preconditions, or cryptographic signatures, see Writing Events.
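To make the enrichment concrete, a stored event might look roughly like this. EventSourcingDB events follow the CloudEvents format, but the exact metadata fields and values below are illustrative assumptions, not actual server output:

```json
{
  "specversion": "1.0",
  "id": "0",
  "time": "2024-01-01T12:00:00Z",
  "source": "https://library.eventsourcingdb.io",
  "subject": "/books/42",
  "type": "io.eventsourcingdb.library.book-acquired",
  "data": {
    "title": "2001 – A Space Odyssey",
    "author": "Arthur C. Clarke",
    "isbn": "978-0756906788"
  }
}
```

Note that your candidate (source, subject, type, data) is preserved verbatim; the server adds the surrounding metadata.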
## Read the Events You Have Written
An event store is not only about writing; it is equally about replaying history. By reading back events, you reconstruct how state evolved over time. Continuing with the library example, you can fetch the events of a single book as follows:
```shell
curl \
  -i \
  -X POST \
  -H "authorization: Bearer secret" \
  -H "content-type: application/json" \
  -d "{
    \"subject\": \"/books/42\",
    \"options\": {
      \"recursive\": false
    }
  }" \
  http://localhost:3000/api/v1/read-events
```
The response is streamed in NDJSON format, meaning each event is returned as a line of JSON. This allows efficient processing of even long histories. With additional parameters you can filter ranges, reverse the order, or request all events under /. See Reading Events for a full overview.
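To get a feel for the streaming format, here is a minimal sketch in plain shell that folds over an NDJSON stream line by line – the same pattern you would apply to the output of read-events. The sample data is inlined instead of fetched from a live server, and its shape is simplified for illustration:

```shell
# Sample NDJSON stream: one JSON object per line
# (real lines carry the full event envelope).
ndjson='{"type":"io.eventsourcingdb.library.book-acquired","subject":"/books/42"}
{"type":"io.eventsourcingdb.library.book-acquired","subject":"/books/42"}'

# Fold over the stream line by line. Here we merely count events,
# but the same loop could rebuild arbitrary state from history.
count=0
while IFS= read -r line; do
  count=$((count + 1))
done <<EOF
$ndjson
EOF

echo "replayed $count events"  # → replayed 2 events
```

Because each event arrives as a complete line, a consumer never has to buffer the whole response before it can start working.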
## Observe New Events in Real-Time
Modern systems rarely stand still. Beyond replaying the past, you often want to react the moment something new occurs – without introducing a separate outbox or polling mechanism. EventSourcingDB supports this directly via observation:
```shell
curl \
  -i \
  -X POST \
  -H "authorization: Bearer secret" \
  -H "content-type: application/json" \
  -d "{
    \"subject\": \"/\",
    \"options\": {
      \"recursive\": true
    }
  }" \
  http://localhost:3000/api/v1/observe-events
```
This request first streams the existing history and then keeps the connection open, delivering new events as they arrive. Heartbeats ensure the stream remains active, and if a connection drops, it can be resumed from a known event ID. For advanced options, see Observing Events.
## Why This Matters
With just a few steps you have touched the essence of event sourcing:
- Immutability – facts are written once and never overwritten.
- Replayability – any state can be derived from the recorded history.
- Observability – the system can react in real time as events occur.
These properties together provide a foundation for systems that are both transparent and adaptable. From here you can branch into richer domains, add projections, enforce consistency with preconditions, or extend the model with queries and signatures – all while keeping the history intact.
## Where to Go Next
- New to the architectural basics? Explore CQRS.com.
- Interested in combining event sourcing with AI? Visit EventSourcing.ai.
- Ready to go further with EventSourcingDB? Continue in the documentation (installation, operations, queries, security, and more).
If you are still evaluating your options, take a look at the Event Store Landscape for an overview of available products.
Next up: Curious how we can support your team on its own event sourcing journey? Learn more about our Consulting and Workshops and how we help teams turn concepts into working systems.