Last modified: Sep 18, 2023

What's new

Overview of changes introduced in v2 of the deployment chart.


2.8.0

  • Add annotation config.linkerd.io/skip-outbound-ports: "443"
  • Add port 80 to the l5d-dst-override URL in the middleware (see the sketch below)
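
For context, a rough sketch of how these two changes could surface in the rendered manifests; the application name, namespace, and Traefik API version below are placeholders, not values taken from the chart:

# Pod template annotation telling linkerd to bypass the proxy for outbound port 443
annotations:
  config.linkerd.io/skip-outbound-ports: "443"

# Traefik Middleware setting the linkerd destination override, now with :80 appended
apiVersion: traefik.containo.us/v1alpha1   # placeholder API version
kind: Middleware
metadata:
  name: example-app                        # placeholder name
spec:
  headers:
    customRequestHeaders:
      l5d-dst-override: "example-app.example-ns.svc.cluster.local:80"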

2.7.0

  • Update the readiness and liveness probes with timeoutSeconds: 30 (see the sketch below)
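
In the rendered Deployment this maps to the standard Kubernetes probe field; a minimal sketch, with path and port as placeholders:

readinessProbe:
  httpGet:
    path: /health   # placeholder path
    port: 80        # placeholder port
  timeoutSeconds: 30
livenessProbe:
  httpGet:
    path: /health
    port: 80
  timeoutSeconds: 30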

2.6.0

  • Add option for a startup probe (see the sketch below)
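
A startup probe makes Kubernetes hold back liveness and readiness checks until the container has finished starting. The chart's exact values.yaml key for this option is not documented here, so the rendered sketch below is only an assumption of what it produces:

startupProbe:
  httpGet:
    path: /health        # assumed to mirror the readiness/liveness default path
    port: 80             # placeholder port
  failureThreshold: 30   # example: allows up to 30 * 10 = 300 seconds of startup time
  periodSeconds: 10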

2.5.0

Changes introduced

  • Raise the default memory resource request to 256Mi (see below)
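
Expressed in the values.yaml structure introduced in 2.0.0 (further down this page), the new default corresponds to:

deployment:
  resources:
    requests:
      memory: 256Mi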

2.4.0

Changes introduced

  • Remove hard-coded values for linkerd resources

  • Add help text for extra annotations

  • If linkerd is enabled, add the annotation cluster-autoscaler.kubernetes.io/safe-to-evict: true (illustrated below)
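
When rendered onto the pod template, the annotation looks roughly like this; note that Kubernetes annotation values are strings, so the boolean is quoted:

# Added to the pod template metadata when linkerd is enabled (illustrative)
annotations:
  cluster-autoscaler.kubernetes.io/safe-to-evict: "true"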

2.3.0

Changes introduced

  • Upgraded HorizontalPodAutoscaler from API version autoscaling/v2beta2 to autoscaling/v2

autoscaling/v2beta2 is deprecated in Kubernetes 1.23+ and removed in 1.26+. App clusters will eventually be upgraded, and deployment of apps using old chart versions will fail once the cluster is upgraded to version 1.26+.
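
In the rendered HorizontalPodAutoscaler manifest, the change amounts to bumping the API version:

# Chart versions < 2.3.0
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler

# Chart version 2.3.0 and later
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler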

AKS release calendar

View release on Github

2.2.0

Changes introduced

  • Make it possible to add custom annotations to pods in values.yaml

How to add pod annotations in values.yaml

deployment:
  podAnnotations:
    key1: value1
    key2: value2
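
These annotations end up on the pod template of the rendered Deployment, roughly as sketched here (illustrative rendering, not verbatim chart output):

spec:
  template:
    metadata:
      annotations:
        key1: value1
        key2: value2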

View release on Github

2.1.0

Enabling the liveness and/or readiness probe requires that your application runs version 4.30.0 or higher of the Altinn.App.* NuGet packages.

Changes introduced

  • Default CPU and memory requested per pod are reduced to 50m and 128Mi, respectively.
  • Configurable liveness and readiness probes are available. They are disabled by default.

New optional fields with default values available in values.yaml

 1  deployment:
 2    readiness:
 3      enabled: false
 4      path: /health
 5      initialDelaySeconds: 30
 6      failureThreshold: 3
 7      periodSeconds: 3
 8      timeoutSeconds: 1
 9    liveness:
10      enabled: false
11      path: /health
12      initialDelaySeconds: 3
13      failureThreshold: 3
14      periodSeconds: 10

Walkthrough:

3. Enable or disable readiness probe for this application.

4. The path to the readiness endpoint in the application.

5. Number of seconds after the container has started before readiness probes are initiated.

6. Minimum consecutive failures for the probe to be considered failed after having succeeded.

7. How often (in seconds) to perform the probe.

8. Number of seconds after which the probe times out.

10. Enable or disable liveness probe for this application.

11. The path to the liveness endpoint in the application.

12. Number of seconds after the container has started before liveness probes are initiated.

13. Minimum consecutive failures for the probe to be considered failed after having succeeded.

14. How often (in seconds) to perform the probe.
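
As a usage example, enabling only the readiness probe with a custom endpoint could look like this; the /health/ready path is a placeholder, not a default from the chart:

deployment:
  readiness:
    enabled: true
    path: /health/ready   # placeholder endpoint
    initialDelaySeconds: 30
    failureThreshold: 3
    periodSeconds: 3
    timeoutSeconds: 1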

View release on Github

2.0.0

Upgrading to 2.0.0 from 1.x.x leads to a short downtime during deployment. Subsequent deployments will run as normal.

If your app's deployment folder contains the templates folder, please follow the migration guide.

Changes introduced

  • Deployment renamed to -v2 due to changes in selector fields (the field is immutable). WARNING: this leads to downtime during the first deploy
  • Add resource requests to all deployments
  • Horizontal pod autoscaler enabled by default for all deployments (automatic scaling of application)
  • Labels and selectors updated for most Kubernetes objects
  • Default initial replicaCount changed from 1 to 2

New optional fields with default values available in values.yaml

 1  deployment:
 2    autoscaling:
 3      enabled: true
 4      replicas:
 5        min: 2
 6        max: 10
 7      avgCpuUtilization: 70
 8      behavior:
 9        stabilizationWindowSeconds:
10          scaleUp: 0
11          scaleDown: 120
12    resources:
13      requests:
14        cpu: 300m
15        memory: 256Mi

Walkthrough

3. Enable or disable autoscaling for this application.

5. The lower limit for the number of pods that can be set by the autoscaler.

6. The upper limit for the number of pods that can be set by the autoscaler.

7. The target average CPU utilization (represented as a percent of requested CPU) over all the pods for when scaling should occur.

9. The stabilization window is used to restrict the flapping of replicas when the metrics used for scaling keep fluctuating.

10. Number of seconds the average CPU utilization for all pods is above the threshold (avgCpuUtilization) before scaleUp starts.

11. Number of seconds the average CPU utilization for all pods is below the threshold (avgCpuUtilization) before scaleDown starts.

14. CPU millicores reserved by the kubelet for each pod of this application. Used by HPA to calculate scale. Pods are allowed to consume more than this if it’s available.

15. Memory reserved by the kubelet for each pod of this application. Pods are allowed to consume more than this if it's available.
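
As a usage example, an app that wants a tighter replica ceiling and a higher CPU target could override just those values; the numbers here are illustrative, not recommendations:

deployment:
  autoscaling:
    replicas:
      min: 2
      max: 4
    avgCpuUtilization: 80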

View release on Github

New optional field without default values available in values.yaml

1  deployment:
2    resources:
3      limits:
4        cpu: 1000m
5        memory: 512Mi

Walkthrough

4. Upper limit of CPU millicores a pod is allowed to consume. Pods hitting the limit will be throttled.

5. Upper limit of memory a pod is allowed to consume. Pods exceeding this limit will be terminated by the system with an out of memory (OOM) error.
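
Requests and limits are typically set together, so that the request reserves capacity while the limit caps actual consumption; a combined example with illustrative values:

deployment:
  resources:
    requests:
      cpu: 300m
      memory: 256Mi
    limits:
      cpu: 1000m      # pod is throttled above this
      memory: 512Mi   # pod is OOM-killed above this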

Pull requests merged