On this page:
The following API controllers are defined:
- OrdersController: API for retrieving one or more orders with or without processing details and notification summaries
- EmailNotificationsOrdersController: API for placing new email notification order requests
- EmailNotificationsController: API for retrieving email notifications related to a single order
- SmsNotificationsOrdersController: API for placing new SMS notification order requests
- SmsNotificationsController: API for retrieving SMS notifications related to a single order
The API controllers listed below are exclusively for use within the Altinn organization:
- Metrics controller: API for retrieving metrics on the use of the service
The API controllers listed below are exclusively for use within the Notification solution:
- Trigger controller: Functionality to trigger the start of order and notifications processing flows.
Data related to notification orders, notifications and recipients is persisted in a PostgreSQL database.
Each table in the notifications schema is described in the table below, followed by a diagram showing the relationships between the tables.
|Contains metadata for each notification order
|Holds the static common texts related to an email notification
|Holds metadata for each email notification along with recipient contact details
|Holds the static common texts related to an SMS notification
|Holds metadata for each SMS notification along with recipient contact details
|Keeps track of resource limit outages for dependent systems, e.g. Azure Communication Services
The Notifications microservice integrates with a Kafka broker, and this integration is used both to publish messages to and consume messages from topics relevant to the microservice.
The following Kafka consumers are defined:
- AltinnServiceUpdateConsumer: Consumes service updates from other Altinn services
- PastDueOrdersConsumer: Consumes notification orders that are ready to be processed for sending
- PastDueOrdersRetryConsumer: Consumes notification orders where the first attempt of processing has failed
- EmailStatusConsumer: Consumes updates on the send state of an email notification
- SmsStatusConsumer: Consumes updates on the send state of an SMS notification
A single producer, KafkaProducer, is implemented and used by all services that publish to Kafka.
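The split between the past-due consumer and its retry counterpart can be sketched as a simple routing decision. This is an illustrative sketch only: the topic names and the `attempts` field are assumptions, not the actual Altinn message contract.

```python
import json

# Topic names below are assumptions for illustration; the real topic names
# are part of the microservice's configuration, not this sketch.
PAST_DUE_TOPIC = "orders.pastdue"
PAST_DUE_RETRY_TOPIC = "orders.pastdue.retry"

def choose_topic(raw_message: str) -> str:
    """Decide which past-due topic a serialized order belongs on.

    Mirrors the PastDueOrdersConsumer / PastDueOrdersRetryConsumer split:
    orders with no failed attempt are consumed from the past-due topic,
    while orders whose first processing attempt failed are re-published
    to the retry topic. The 'attempts' counter is an assumed message shape.
    """
    order = json.loads(raw_message)
    return PAST_DUE_RETRY_TOPIC if order.get("attempts", 0) > 0 else PAST_DUE_TOPIC
```

In the real service this decision is made when a processing attempt fails; the retry consumer then picks the order up from the retry topic.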
Multiple cron jobs have been set up to enable triggering of actions in the application on a schedule.
The following cron jobs are defined:
|*/1 * * * *
|Sends a request to the endpoint that starts processing of past due orders
|*/1 * * * *
|Sends a request to the endpoint that starts the process of sending all new email notifications
|*/1 * * * *
|Sends a request to the endpoint that starts the process of sending all new SMS notifications
The specifications of the cron jobs are hosted in a private repository in Azure DevOps (requires login).
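The `*/1 * * * *` expressions above fire every minute. As a minimal sketch of how such a field is interpreted (handling only the `*`, `*/n`, and literal-number forms; a full cron parser also supports ranges and comma-separated lists):

```python
def minute_field_matches(field: str, minute: int) -> bool:
    """Check whether a cron minute field matches a given minute (0-59).

    Handles only the forms relevant here: '*', '*/n', and a literal number.
    """
    if field == "*":
        return True
    if field.startswith("*/"):
        # Step form: matches every minute divisible by the step.
        step = int(field[2:])
        return minute % step == 0
    return int(field) == minute
```

With a step of 1, every minute matches, which is why the three jobs above each run once a minute.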
The microservice makes use of a range of external services, Altinn services, and .NET libraries to support the provided functionality. Descriptions of the key dependencies are given below.
|Apache Kafka on Confluent Cloud
|Hosts the Kafka broker
|Azure Database for PostgreSQL
|Hosts the database
|Azure API Management
|Manages access to public API
|Application Insights
|Receives telemetry from the application
|Azure Key Vault
|Safeguards secrets used by the microservice
|Azure Kubernetes Services (AKS)
|Hosts the microservice and cron jobs
|Authorizes access to the API
|Altinn Notifications Email*
|Service for sending emails related to a notification
|Altinn Notifications Sms*
|Service for sending SMS related to a notification
*Functional dependency to enable the full functionality of Altinn Notifications.
The Notifications microservice makes use of a range of libraries to support the provided functionality.
|Used to validate tokens in requests
|Used to integrate with the Kafka broker
|Used to validate content of API request
|Used to validate Altinn token (JWT)
|Used to validate mobile numbers
|Used to access the database server
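To illustrate the mobile-number validation mentioned above, here is a simplified sketch of the kind of check such a library performs. A real phone-number library does far more (country codes, region-specific lengths); this approximation only enforces an E.164-like shape and is not the library's actual logic.

```python
import re

# E.164-like shape: a '+', a non-zero first digit, and at most 15 digits
# in total. This is an illustrative approximation, not a full validation.
_E164_RE = re.compile(r"^\+[1-9]\d{1,14}$")

def looks_like_mobile_number(value: str) -> bool:
    """Return True if the value has a plausible E.164 mobile-number shape."""
    return bool(_E164_RE.match(value.strip()))
```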
Quality gates for the project require 80% code coverage for the unit and integration tests combined. xUnit is the test framework, and the Moq library supports mocking parts of the solution.
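The mocking approach can be illustrated with Python's standard-library `unittest.mock` (the project itself uses xUnit and Moq in C#; the `OrderService` shape below is hypothetical, not the actual code):

```python
from unittest.mock import Mock

class OrderService:
    """Hypothetical service with an injected dependency, analogous to how
    Moq isolates a class under test from e.g. a repository or Kafka producer."""

    def __init__(self, repository):
        self.repository = repository

    def get_order(self, order_id):
        order = self.repository.fetch(order_id)
        return order if order is not None else {"id": order_id, "status": "not_found"}

# Replace the real repository with a mock so the test exercises only
# the service's own logic.
repo = Mock()
repo.fetch.return_value = None
service = OrderService(repo)
result = service.get_order("42")
```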
There are two dependencies for the integration tests:
A YAML file has been created to easily start all Kafka-related dependencies in Docker containers.
A PostgreSQL database needs to be installed wherever the tests are running, either in a Docker container or installed on the machine and exposed on port 5432.
A bash script has been set up to easily generate all required roles and rights in the database.
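A minimal sketch of assembling the connection string the tests might use against such a local database, in the keyword/value form common in .NET. The host, database, and role names below are placeholder assumptions; the real test configuration supplies its own values.

```python
def build_connection_string(host: str, port: int, database: str,
                            user: str, password: str) -> str:
    """Assemble a PostgreSQL keyword/value connection string.

    All values passed in below are illustrative placeholders.
    """
    return f"Host={host};Port={port};Database={database};Username={user};Password={password}"

local_dsn = build_connection_string(
    "localhost", 5432, "notificationsdb", "platform_notifications", "secret")
```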
See the section on running the application locally if further assistance is required to run the integration tests.
The automated tests for this microservice are implemented with Grafana's k6. The tool is specialized for load tests, but we use it for automated API tests as well. The test set is used for both use case and regression tests.
Use case tests
Use case tests are run every 15 minutes through GitHub Actions. The tests run during the use case tests are defined in the k6 test project. The aim of the tests is to run through central functionality of the solution to ensure that it is running and available to our end users.
Regression tests
The regression tests are run once a week and 5 minutes after each deploy to a given environment. The tests run during the regression tests are defined in the k6 test project. The aim of the regression tests is to cover as much of our functionality as possible, to ensure that a new release does not break any existing functionality.
The microservice runs in a Docker container hosted in AKS, and it is deployed as a Kubernetes deployment with autoscaling capabilities.
The notifications application runs on port 5090.
See the Dockerfile for details.
The cron jobs run in Docker containers hosted in AKS and are started on a schedule configured in the Helm chart. A policy is in place to ensure that there are no concurrent pods of a single job.
The database is hosted on a PostgreSQL flexible server in Azure.
Build & deploy
- Build and code analysis run in a GitHub workflow
- The image is built in an Azure DevOps pipeline
- Deployment of the image is enabled with Helm and implemented in an Azure DevOps release pipeline
- Deployment of the cron jobs is enabled with Helm and implemented in the same pipeline that deploys the web API
- Migration scripts are copied into the Docker image of the web API when it is built
- The migration scripts are executed on application startup, enabled by YUNIQL
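YUNIQL applies versioned migration directories in order (e.g. `v0.00`, `v0.01`, ...). A sketch of the numeric version ordering this implies; the directory names below are illustrative, not taken from the repository:

```python
def sort_migration_versions(names: list[str]) -> list[str]:
    """Order yuniql-style version directory names numerically.

    Numeric ordering matters: sorted lexically, 'v0.10' would come
    before 'v0.9', which would apply migrations in the wrong order.
    """
    def key(name: str) -> tuple[int, int]:
        major, minor = name.lstrip("v").split(".")
        return int(major), int(minor)
    return sorted(names, key=key)
```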
Run on local machine
Instructions on how to set up the service on a local machine for development or testing are covered by the README in the repository.