- The bus which handles the delivery of messages between queues
- The API which provides an interface to read from and write to the queues
### Data flow
- Client A sends to `client-a-outbox` (by POSTing to API /send)
- Messages are forwarded from `client-a-outbox` to `soar-publish`
- Messages are published from `soar-publish` with the topic read from the message
- Subscribers listen for messages
- Subscription is delivered to `client-b-inbox` (if the client subscription matches the message topic)
- Client B reads from `client-b-inbox` (by GETting from API /receive) - see the example requests after this list
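For illustration, the send and receive steps might look like the requests below. The host and port are placeholders, and the payload field names and the way `client_id`/`secret` are passed are assumptions rather than the definitive API contract (the `client_id`/`secret` pair is explained under the auth placeholder below):

```
# Client A sends a message on a topic (field names are illustrative)
curl -X POST http://localhost:8080/send \
  -H "Content-Type: application/json" \
  -d '{"client_id": "client-a", "secret": "<secret>", "topic": "example.topic", "body": "hello"}'

# Client B reads pending messages from its inbox
curl "http://localhost:8080/receive?client_id=client-b&secret=<secret>"
```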
There is a parallel flow when a client sends to `client-a-broadcast` (by POSTing to /notify).
In this case the messages are delivered through the broadcast exchange to every client's `client-x-inbox`.

### Auth placeholder
As a proxy for proper authentication, when you POST a new client a random secret is
returned in the response. To send to or receive from the bus you then call the API
with the client_id and secret, and the API checks that they match. The client_id determines
which queues the client reads from.

Subsequent requests to the client endpoint return the client_id but not the secret.
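A sketch of that first exchange, assuming the client endpoint is `/client` and the API is on localhost port 8080 (both assumptions):

```
# Register a client (endpoint path and port are placeholders);
# the response contains the client_id and the generated secret
curl -X POST http://localhost:8080/client
```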
### Prerequisites
- Python >= 3.8
- A virtual environment manager - virtualenv, venv or pipenv
- Docker and docker-compose
### Testing
Current coverage:
- API: yes
- Pika RabbitMQ implementation: no
Run the test suite with:

```
pytest
```
### Running via docker-compose
Using `docker-compose` means that everything is set up automatically: the `rabbitmq` container, the backbone API, and the backbone bus. The `run-compose.sh` script is provided to simplify this even further - all you have to do is set whatever env vars you need in the `.env` file and then run `./run-compose.sh`. The defaults in `.env` are fine for local dev work, but vars not labelled optional will need to be set in a production setting. The env vars are listed below, followed by an example `.env`:
- `DATA_DIR` - Where to mount the volume of the API container on your local system. This defaults to the result of `pwd`, which should be within the `communications-backbone` repo
- `SOAR_TOKEN_LIFETIME` (Optional) - The number of hours until a newly created token expires
- `SOAR_TOKEN_SECRET` (Optional) - A secret key used to encrypt/decrypt token data. If specified, the value should be generated using `TokenModel.getKey()`
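For illustration only, a `.env` might look like the following - the values are made up, not defaults shipped with the repo:

```
# Host path mounted as the API container's volume (defaults to the result of `pwd`)
DATA_DIR=/path/to/communications-backbone

# Optional: hours until a newly created token expires
SOAR_TOKEN_LIFETIME=24

# Optional: key used to encrypt/decrypt token data - generate with TokenModel.getKey()
SOAR_TOKEN_SECRET=<value from TokenModel.getKey()>
```

Then bring everything up with:

```
./run-compose.sh
```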
### Running the bus and API natively (without docker)
#### Setup
We recommend using some form of Python virtual environment to maintain a consistent
Python version and ring-fence the package management.
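For example, using the built-in `venv` module (any of the managers listed under Prerequisites will do; the environment name `.venv` is just a convention):

```
python3 -m venv .venv
source .venv/bin/activate
```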
In your virtual environment:
```
pip install -r requirements-dev.txt
```
This installs both the development and runtime dependencies for the project.