Update examples using Docker Compose (#4857)
* Update docker-compose example README

- Update to Docker Compose v2 commands.
- Replace `docker system prune` with `docker compose down`.

* Update Event-driven architecture example README file

* Rename Docker Compose file

Make docker-compose.yml name consistent between examples
dreglad authored Jul 11, 2024
1 parent 86c9390 commit a160baa
Showing 3 changed files with 52 additions and 45 deletions.
25 changes: 11 additions & 14 deletions examples/docker-compose/README.md
@@ -10,7 +10,7 @@ and related configuration.

## Prerequisites

-Install *Docker* and *Docker Compose*. You can either build *Apicurio Registry* images locally
+Install *Docker*. You can either build *Apicurio Registry* images locally
(see the build documentation in the project root),
or use the pre-built images from a public registry.

@@ -22,36 +22,34 @@ and copy the content of the `./config` directory to the volume - `docker volume
### Metrics with Prometheus and Grafana

Run `compose-metrics.yaml` together with a base compose file, e.g.
-`docker-compose -f compose-metrics.yaml -f compose-base-sql.yaml up --abort-on-container-exit`.
+`docker compose -f compose-metrics.yaml -f compose-base-sql.yaml up --abort-on-container-exit`.

*Grafana* console should be available at `http://localhost:3000` after logging in as *admin/password*.



-### Docker-compose and Quarkus based installation
+### Docker Compose and Quarkus based installation

#### Overview

This setup contains a fully configured Apicurio Registry package already integrated with Keycloak. Currently every application is routed to the host network without SSL support. This is a development version, do not use it in a production environment!

Here is the port mapping:

- 8080 for Keycloak
- 8081 for the Registry API
- 8888 for the Registry UI
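As a sketch, the port mapping above would correspond to compose entries like the following. The service names and container-side ports are illustrative assumptions, not taken from the actual `docker-compose.apicurio.yml`:

```yaml
# Hypothetical excerpt; service names and internal ports are illustrative only.
services:
  keycloak:
    ports:
      - "8080:8080"   # Keycloak admin console on host port 8080
  registry:
    ports:
      - "8081:8080"   # Registry API exposed on host port 8081
  registry-ui:
    ports:
      - "8888:8080"   # Registry UI exposed on host port 8888
```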


#### Starting the environment

-You can start the whole stack with these commands:
+You can start the whole stack with this command:

-```
-docker-compose -f docker-compose.apicurio.yml up
+```console
+docker compose -f docker-compose.apicurio.yml up
```

-To clear the environment, please run these commands:
+To clear the environment, please run this command:

-```
-docker system prune --volumes
+```console
+docker compose -f docker-compose.apicurio.yml down --volumes
```

#### Configure users in Keycloak
@@ -63,11 +61,10 @@ At the first start there are no default users added to Keycloak. Please navigate

The default credentials for Keycloak are: `admin` and the password is also `admin`.

-Select Registry realm and add a user to it. You'll need to also assign the appropriated role.
+Select `Registry` realm and add a user to it. You'll need to also assign the appropriate role.

#### Login to Apicurio and Keycloak

Apicurio Registry UI URL: `http://YOUR_IP:8888`
Apicurio Registry API URL: `http://YOUR_IP:8081`
Keycloak URL: `http://YOUR_IP:8080`

72 changes: 41 additions & 31 deletions examples/event-driven-architecture/README.md
@@ -1,25 +1,30 @@
-# Kafka, ksqldb, Kafka-ui, apicurio-registry and Debezium together
+# Event-driven architecture with Apicurio Registry

-This tutorial demonstrates how to use [Debezium](https://debezium.io/) to monitor the PostgreSQL database used by Apicurio Registry. As the
-data in the database changes, by adding e.g. new schemas, you will see the resulting event streams.
+## Kafka, ksqldb, Kafka-ui, apicurio-registry and Debezium together
+
+This tutorial demonstrates how to use [Debezium](https://debezium.io/)
+to monitor the PostgreSQL database used by Apicurio Registry.
+As the data in the database changes, such as by adding new schemas,
+you will see the resulting event streams.

## Avro serialization

-The [Apicurio Registry](https://github.com/Apicurio/apicurio-registry) open-source project provides several
-components that work with Avro:
+The [Apicurio Registry](https://github.com/Apicurio/apicurio-registry)
+open-source project provides several components that work with Avro:

-- An Avro converter that you can specify in Debezium connector configurations. This converter maps Kafka
-  Connect schemas to Avro schemas. The converter then uses the Avro schemas to serialize the record keys and
-  values into Avro’s compact binary form.
+- An Avro converter that you can specify in Debezium connector configurations.
+  This converter maps Kafka Connect schemas to Avro schemas.
+  The converter then uses the Avro schemas to serialize the record keys and values
+  into Avro’s compact binary form.

- An API and schema registry that tracks:

  - Avro schemas that are used in Kafka topics
  - Where the Avro converter sends the generated Avro schemas

### Prerequisites

-- Docker and is installed and running.
+- Docker is installed and running.

This tutorial uses Docker and the Linux container images to run the required services. You should use the
latest version of Docker. For more information, see
@@ -29,20 +34,20 @@ components that work with Avro:

1. Clone this repository:

-```bash
-git clone https://github.com/Apicurio/apicurio-registry-examples.git
+```console
+git clone https://github.com/Apicurio/apicurio-registry.git
```

1. Change to the following directory:

-```bash
-cd event-driven-architecture
+```console
+cd examples/event-driven-architecture
```

1. Start the environment

-```bash
-docker-compose up -d
+```console
+docker compose up -d
```

The last command will start the following components:
@@ -56,14 +61,15 @@ The last command will start the following components:

## Apicurio converters

-Configuring Avro at the Debezium Connector involves specifying the converter and schema registry as a part of
-the connectors configuration. The connector configuration file configures the connector but explicitly sets
-the (de-)serializers for the connector to use Avro and specifies the location of the Apicurio registry.
+Configuring Avro at the Debezium Connector involves specifying the converter
+and schema registry as a part of the connector's configuration.
+The configuration file sets the (de-)serializers to use Avro
+and specifies the location of the Apicurio registry.

> The container image used in this environment includes all the required libraries to access the connectors and converters.

-The following are the lines required to set the **key** and **value** converters and their respective registry
-configuration:
+The following lines are required to set the **key** and **value** converters
+and their respective registry configurations:

```json
{
@@ -81,20 +87,24 @@ configuration:
### Create the connector

-Let's create the Debezium connector to start capturing the changes of the database.
+Let's create the Debezium connector to start capturing changes in the database.

-1. Create the connector using the REST API. You can execute this step either by using the curl command below
-   or by creating the connector from the Kafka UI.
+1. Create the connector using the REST API.
+   You can execute this step either by using the `curl` command below
+   or by creating the connector from the Kafka UI.

-```bash
-curl -X POST http://localhost:8083/connectors -H 'content-type:application/json' -d @studio-connector.json
+```console
+curl http://localhost:8083/connectors -H 'Content-Type: application/json' -d @studio-connector.json
```
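The contents of `studio-connector.json` are not shown here, but a Debezium PostgreSQL connector payload that registers the Apicurio Avro converters generally takes the following shape. All host names, credentials, the topic prefix, and the connector name below are placeholder assumptions for illustration:

```json
{
  "name": "example-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "postgres",
    "database.port": "5432",
    "database.user": "apicurio",
    "database.password": "registry",
    "database.dbname": "registry",
    "topic.prefix": "registry-db",
    "key.converter": "io.apicurio.registry.utils.converter.AvroConverter",
    "key.converter.apicurio.registry.url": "http://registry:8080/apis/registry/v2",
    "key.converter.apicurio.registry.auto-register": "true",
    "value.converter": "io.apicurio.registry.utils.converter.AvroConverter",
    "value.converter.apicurio.registry.url": "http://registry:8080/apis/registry/v2",
    "value.converter.apicurio.registry.auto-register": "true"
  }
}
```

After submitting a connector, the Kafka Connect REST API can report its state via `GET /connectors/<name>/status`, which shows whether the connector and its tasks are `RUNNING`.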

### Check the data

-The previous step created and started the connector. Now, all the data inserted in the Apicurio Registry database will be captured by Debezium
-and sent as events into Kafka.
+The previous step created and started the connector.
+Now, all the data inserted in the Apicurio Registry database
+will be captured by Debezium and sent as events into Kafka.

## Summary

-By using this example you can test how to start a full even driven architecture, but it's up to you how to use the produced events in e.g. ksqldb to create streams/tables etc.
+This example allows you to test how to start a full event-driven architecture.
+How you use the produced events is up to you,
+such as creating streams or tables in ksqlDB, etc.
