Startup exception for ASP.NET Core worker Docker container attempting to set secret from env variable in Kubernetes

I'm trying to set secrets in an ASP.NET Core (3.0 SDK) worker application that runs in a Linux container in Kubernetes. The application consumes environment-variable secrets correctly when I run it from Visual Studio 2019 in a local Docker container, and also in a container on a local Kubernetes instance on Windows.

However, when I deploy it to AKS, the application fails to start. It crashes with the error:

Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused process_linux.go:449: container init caused setenv: invalid argument: unknown

I'm pretty sure the secrets are deployed correctly, as shown below.

I define the secrets with YAML similar to the following. Deployment is carried out by Azure DevOps, and the `__` token values are replaced by one of the pipeline tasks. When I check the values on the AKS instance's secrets page in the dashboard, I can see that they have been deployed correctly.

apiVersion: v1
kind: Secret
metadata:
  name: paymentservicesettings
type: Opaque
data:
  dbconnectionstring: __dbconnectionstringbase64__
  storagebaseurl: __storagebaseurlbase64__
  storageconnectionstring: __storageconnectionstringbase64__
  baseuri: __baseuribase64__
  subscriptionkey: __subscriptionkeybase64__
  userid: __useridbase64__
  usersecret: __usersecretbase64__

where the replaced values are Base64-encoded strings.

I apply this via the `kubectl apply -f appsettings.yml` command. I can see the settings are defined correctly when I check them via the dashboard.
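
As a sanity check on the encoding step (a stray trailing newline baked into a Base64 value is easy to introduce when encoding with plain `echo`), this is the difference between the two common ways of producing the encoded strings; `secret` here is just a placeholder value, not one of my real settings:

```shell
# echo appends a newline, which ends up inside the decoded secret value
with_newline=$(echo 'secret' | base64)

# printf '%s' emits the bytes exactly as given, with no trailing newline
clean=$(printf '%s' 'secret' | base64)

echo "$with_newline"  # c2VjcmV0Cg==  (the Cg== at the end is an encoded \n)
echo "$clean"         # c2VjcmV0
```

So a value that decodes to `secret\n` rather than `secret` looks fine in the dashboard but is not byte-identical to what the application expects.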

I then define the deployment like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  generation: 1
  labels:
    run: payment-service
  name: payment-service
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: payment-service
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        run: payment-service
    spec:
      containers:
      - image: org.azurecr.io/payment-service:__tagBuildId__ # this tag is auto-replaced for every deployment
        imagePullPolicy: IfNotPresent
        name: payment-service
        env:
        - name: "ASPNETCORE_ENVIRONMENT"
          value: "Kubernetes"
        - name: "dbconnectionstring"
          valueFrom:
            secretKeyRef:
              name: paymentservicesettings
              key: dbconnectionstring

        - name: "storagebaseurl"
          valueFrom:
            secretKeyRef:
              name: paymentservicesettings
              key: storagebaseurl

        - name: "storageconnectionstring"
          valueFrom:
            secretKeyRef:
              name: paymentservicesettings
              key: storageconnectionstring

        - name: "baseuri"
          valueFrom:
            secretKeyRef:
              name: paymentservicesettings
              key: baseuri

        - name: "subscriptionkey"
          valueFrom:
            secretKeyRef:
              name: paymentservicesettings
              key: subscriptionkey

        - name: "userid"
          valueFrom:
            secretKeyRef:
              name: paymentservicesettings
              key: userid

        - name: "usersecret"
          valueFrom:
            secretKeyRef:
              name: paymentservicesettings
              key: usersecret

      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      imagePullSecrets:
      - name: acr-auth
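
As an aside, since every `env` entry above uses the same name as its secret key, the per-key list could in principle be collapsed with `envFrom` (a sketch, keeping the same secret name; the plain `ASPNETCORE_ENVIRONMENT` entry would stay as an ordinary `env` item):

```yaml
      containers:
      - image: org.azurecr.io/payment-service:__tagBuildId__
        name: payment-service
        env:
        - name: "ASPNETCORE_ENVIRONMENT"
          value: "Kubernetes"
        # Injects every key of the secret as an env variable of the same name
        envFrom:
        - secretRef:
            name: paymentservicesettings
```

I haven't confirmed whether this changes the behaviour, but it would at least rule out a typo in one of the individual `secretKeyRef` entries.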

When I first deploy the service, the image pulls correctly and the container attempts to start. It then seems unable to find the env variables and crashes. Because the pods crash immediately, I am unable to attach a debugger to find out why.
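
For what it's worth, this is the triage I've been running (pod name taken from `kubectl get pods`; these are standard kubectl commands, nothing specific to my setup):

```shell
# Show the pod's state, last termination reason, and recent events
kubectl describe pod <payment-service-pod-name>

# Logs from the previous (crashed) container instance, if any were written
kubectl logs <payment-service-pod-name> --previous

# Cluster events often carry the full runc/OCI error text
kubectl get events --sort-by=.metadata.creationTimestamp

# Decode one stored secret value and look for stray bytes (e.g. a trailing \n)
kubectl get secret paymentservicesettings \
  -o jsonpath='{.data.dbconnectionstring}' | base64 -d | od -c | tail
```

The `--previous` logs come back empty, which suggests the process never actually started.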

I have had a look at trying to debug via Azure Dev Spaces, but it looks like this is currently unsupported for worker services.

I've tried to ensure I've got as much exception-handling code as possible on the C# side to prevent crashes on startup, but this hasn't made any difference.

Can anyone suggest why the pods are crashing and what I can do about it?

Thanks