Bitbucket CI/CD pipeline deployment to Azure App Service fails (.Net 6 Blazor Server)
There is a working Linux-hosted Azure App Service with a created service principal. I would like to connect it with the Bitbucket repo.
For reference, I use the following pipe: microsoft/azure-web-apps-deploy:1.0.4
https://bitbucket.org/microsoft/azure-web-apps-deploy/src/master/README.md
Everything is working except the last step:
Status: Downloaded newer image for mspipes/azure-web-apps-deploy:1.0.4
INFO: Signing in...
az login --service-principal --username $AZURE_APP_ID --password $AZURE_PASSWORD --tenant $AZURE_TENANT_ID
ERROR: No subscriptions were found for '$AZURE_APP_ID'. If this is expected, use '--allow-no-subscriptions' to have tenant level accesses
INFO: Starting deployment to Azure app service...
az webapp deployment source config-zip --resource-group webcdlm-prod-rg-brazilsouth --name bscdlm --src web.zip
ERROR: Please run 'az login' to setup the account.
Can you tell me how to pass the '--allow-no-subscriptions' parameter? I can't see any option for it in the pipe. Or is there anything I can change on the Azure App Service or the service principal?
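For illustration, here is a step that bypasses the pipe and runs az directly, so the login flags are under your control (the image choice and step layout are assumptions; the variable and resource names mirror the question). Note that az webapp deployment itself needs subscription-level access, so assigning the service principal a role on the subscription, e.g. with az role assignment create, may be the actual fix rather than --allow-no-subscriptions:

```yaml
# Hypothetical bitbucket-pipelines.yml step replacing the pipe with raw az calls.
- step:
    name: Deploy to Azure App Service
    image: mcr.microsoft.com/azure-cli
    script:
      - az login --service-principal --username $AZURE_APP_ID --password $AZURE_PASSWORD --tenant $AZURE_TENANT_ID --allow-no-subscriptions
      # One-time, run by a subscription admin, so the zip deploy can find the app:
      # az role assignment create --assignee $AZURE_APP_ID --role Contributor --scope /subscriptions/<subscription-id>
      - az webapp deployment source config-zip --resource-group webcdlm-prod-rg-brazilsouth --name bscdlm --src web.zip
```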
See also questions close to this topic
- Deploy VueJS + API app to Azure Static Web App with Gitlab doesn't create functions
I've started creating a small application that will use VueJS as a frontend with Azure Functions as the backend. I was looking at using Azure Static Web Apps to host both components for the application and Gitlab to store / deploy the app.
Everything works except the creation of the Azure Functions. I am following https://docs.microsoft.com/en-us/azure/static-web-apps/gitlab?tabs=vue

The output from the deploy step, listed below, is:
App Directory Location: '/builds/*/valhalla/valhalla-client/dist/spa' was found.
Api Directory Location: '/builds/*/valhalla/valhalla-api/dist' was found.
Looking for event info
Could not get event info. Proceeding
Starting to build app with Oryx
Azure Static Web Apps utilizes Oryx to build both static applications and Azure Functions. You can find more details on Oryx here: https://github.com/microsoft/Oryx
---Oryx build logs---
Operation performed by Microsoft Oryx, https://github.com/Microsoft/Oryx
You can report issues at https://github.com/Microsoft/Oryx/issues
Oryx Version: 0.2.20220131.3, Commit: ec344c058843461525ff03b46031553b6e15a47a, ReleaseTagName: 20220131.3
Build Operation ID: |qAffRWArEg8=.deee9498_
Repository Commit : 7cdd5b61f956e6cb8459b13a42af363c4440a97b
Detecting platforms...
Could not detect any platform in the source directory.
Error: Could not detect the language from repo.
---End of Oryx build logs---
Oryx was unable to determine the build steps. Continuing assuming the assets in this folder are already built. If this is an unexpected behavior please contact support.
Finished building app with Oryx
Starting to build function app with Oryx
---Oryx build logs---
Operation performed by Microsoft Oryx, https://github.com/Microsoft/Oryx
You can report issues at https://github.com/Microsoft/Oryx/issues
Oryx Version: 0.2.20220131.3, Commit: ec344c058843461525ff03b46031553b6e15a47a, ReleaseTagName: 20220131.3
Build Operation ID: |NGXLP5bVBRk=.705477f6_
Repository Commit : 7cdd5b61f956e6cb8459b13a42af363c4440a97b
Detecting platforms...
Could not detect any platform in the source directory.
Error: Could not detect the language from repo.
---End of Oryx build logs---
Oryx was unable to determine the build steps. Continuing assuming the assets in this folder are already built. If this is an unexpected behavior please contact support.
[WARNING] The function language could not be detected. The language will be defaulted to node.
Function Runtime Information. OS: linux, Functions Runtime: ~3, node version: 12
Finished building function app with Oryx
Zipping Api Artifacts
Done Zipping Api Artifacts
Zipping App Artifacts
Done Zipping App Artifacts
Uploading build artifacts.
Finished Upload.
Polling on deployment.
Status: InProgress. Time: 0.1762737(s)
Status: InProgress. Time: 15.3950401(s)
Status: Succeeded. Time: 30.5043965(s)
Deployment Complete :)
Visit your site at: https://polite-pebble-0dc00000f.1.azurestaticapps.net
Thanks for using Azure Static Web Apps!
Exiting
Cleaning up project directory and file based variables
Job succeeded
The deploy step appears to have succeeded, and the frontend is deployed, but no Azure Functions show up in this Static Web App. Am I missing something here? So far, the Azure Functions I have are the boilerplate from instantiating a new Azure Functions folder.
image: node:latest

variables:
  API_TOKEN: $DEPLOYMENT_TOKEN
  APP_PATH: '$CI_PROJECT_DIR/valhalla-client/dist/spa'
  API_PATH: '$CI_PROJECT_DIR/valhalla-api/dist'

stages:
  - install_api
  - build_api
  - install_client
  - build_client
  - deploy

install_api:
  stage: install_api
  script:
    - cd valhalla-api
    - npm ci
  artifacts:
    paths:
      - valhalla-api/node_modules/
  cache:
    key: node
    paths:
      - valhalla-api/node_modules/
  only:
    - master

install_client:
  stage: install_client
  script:
    - cd valhalla-client
    - npm ci
  artifacts:
    paths:
      - valhalla-client/node_modules/
  cache:
    key: node
    paths:
      - valhalla-client/node_modules/
  only:
    - master

build_api:
  stage: build_api
  dependencies:
    - install_api
  script:
    - cd valhalla-api
    - npm install -g azure-functions-core-tools@3 --unsafe-perm true
    - npm run build
  artifacts:
    paths:
      - valhalla-api/dist
  cache:
    key: build_api
    paths:
      - valhalla-api/dist
  only:
    - master
  needs:
    - job: install_api
      artifacts: true
      optional: true

build_client:
  stage: build_client
  dependencies:
    - install_client
  script:
    - cd valhalla-client
    - npm i -g @quasar/cli
    - quasar build
  artifacts:
    paths:
      - valhalla-client/dist/spa
  cache:
    key: build_client
    paths:
      - valhalla-client/dist/spa
  only:
    - master
  needs:
    - job: install_client
      artifacts: true
      optional: true

deploy:
  stage: deploy
  dependencies:
    - build_api
    - build_client
  image: registry.gitlab.com/static-web-apps/azure-static-web-apps-deploy
  script:
    - echo "App deployed successfully."
  only:
    - master
- Azure Synapse Notebooks Vs Azure Databricks notebooks
I was going through the features of Azure Synapse Notebooks Vs Azure Databricks notebooks.
- Are there any major differences between these, apart from the component they belong to?
- Are there any scenarios where one is more appropriate than the other?
- How to authorize azure container registry requests from .NET CORE C#
I have a web application which creates ContainerInstances, and there are specific container registry images I want to use. As a result, I use this code to get my Azure container registry:
IAzure azure = Azure.Authenticate($"{applicationDirectory}/Resources/my.azureauth").WithDefaultSubscription();
IRegistry azureRegistry = azure.ContainerRegistries.GetByResourceGroup("testResourceGroup", "testContainerRegistryName");
I get this error when the second line of code is hit
The client 'bc8fd78c-2b1b-4596-827e-6a3c918b7c17' with object id 'bc8fd78c-2b1b-4596-827e-6a3c918b7c17' does not have authorization to perform action 'Microsoft.ContainerRegistry/registries/read' over scope '/subscriptions/506b787d-83ef-426a-b7b8-7bfcdd475855/resourceGroups/testapp-live/providers/Microsoft.ContainerRegistry/registries/testapp' or the scope is invalid. If access was recently granted, please refresh your credentials.
I literally have no idea what to do about this. I have seen so many articles talking about Azure AD and granting user roles. Can someone please walk me step by step through fixing this? I REALLY appreciate the help. Thanks.
I cannot find any client under that object ID so perfectly fine starting from scratch again with a better understanding of what I am doing.
- Conditional KinesisStreamSpecification in CloudFormation script
I am new to CloudFormation scripts and am trying to set a conditional attribute for an AWS DynamoDB table using YAML files.

I tried the below, but I get an error during stack creation: "Property StreamArn cannot be empty." It seems AWS::NoValue is not allowed in this case.

Can we set the KinesisStreamSpecification property itself on the condition?
KinesisStreamSpecification:
  StreamArn: !If
    - ShouldAttachKinesis
    - !Sub "arn:aws:kinesis:SomeValue"
    - !Ref "AWS::NoValue"
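StreamArn is required whenever KinesisStreamSpecification is present, which matches the error. To sketch the alternative the question asks about: the !If can sit one level up, so the entire property resolves to AWS::NoValue when the condition is false (the surrounding resource shape is an assumption):

```yaml
MyTable:
  Type: AWS::DynamoDB::Table
  Properties:
    # ...other table properties...
    KinesisStreamSpecification: !If
      - ShouldAttachKinesis
      - StreamArn: !Sub "arn:aws:kinesis:SomeValue"
      - !Ref "AWS::NoValue"
```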
- OpenCV cannot read manually edited YAML parameters
I manually added a custom matrix to a YAML file of OpenCV parameters; the problem is that OpenCV cannot read the matrix and returns a None type. I do not know what is happening here, as I tried editing the file both in Notepad and in Visual Studio Code.
%YAML:1.0
---
test_matrix: !!opencv-matrix
   rows: 2
   cols: 2
   dt: i
   data: [ 1, 1, 1, 1 ]
- In a json embedded YAML file - replace only json values using Python
I have a YAML file as follows:
api: v1
hostname: abc
metadata:
  name: test
  annotations: {
    "ip" : "1.1.1.1",
    "login" : "fad-login",
    "vip" : "1.1.1.1",
    "interface" : "port1",
    "port" : "443"
  }
I am trying to read this data from a file, replace only the values of ip and vip, and write it back to the file. What I tried is:

with open("test.yaml", "w") as f:
    yaml.dump(object, f)  # this does not help me since it converts the entire file to YAML

json.dump() does not work either, as it converts the entire file to JSON. The format needs to stay the same; only the values need to be updated. How can I do so?
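One workaround, given that yaml.dump and json.dump both rewrite the whole file: treat the file as plain text and substitute only the quoted values of the targeted keys. This is a sketch with a toy document; ruamel.yaml's round-trip mode is another option when the layout must be preserved by a real parser:

```python
import re

def replace_values(text, updates):
    """Replace only the quoted value following each "key" : "value" pair,
    leaving every other byte of the file untouched."""
    for key, new_value in updates.items():
        pattern = r'("%s"\s*:\s*")[^"]*(")' % re.escape(key)
        text = re.sub(pattern, lambda m: m.group(1) + new_value + m.group(2), text)
    return text

# Toy document mirroring the annotations block from the question:
doc = '"ip" : "1.1.1.1", "login" : "fad-login", "vip" : "1.1.1.1"'
print(replace_values(doc, {"ip": "2.2.2.2", "vip": "3.3.3.3"}))
# prints: "ip" : "2.2.2.2", "login" : "fad-login", "vip" : "3.3.3.3"
```

For the real file, read the text with open(...).read(), pass it through replace_values, and write it back; everything outside the matched values, including the embedded JSON layout, stays byte-for-byte identical.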
- Is it possible to only allow some branches to trigger Github actions?
I was wondering if there is a way to restrict the branches that can trigger an action. Here is my use case.

I have a repository with a workflow. This workflow deploys the code to my prod environment, via the deploy_to_prod action, only when there is a push on master. Now suppose someone creates a develop branch and pushes a modification of the workflow so that it triggers on pushes to master or develop. That person is now able to deploy develop to the prod environment without restriction, and thus a branch protection on master is useless.

Am I missing something? Do you have any mitigation policies to avoid this situation? I have thought about restricting the branches that can trigger a workflow, but I am not sure that would be sufficient or even possible.
- Can we protect workflows from modifications that have not been merged to master, for example?
- Could we add a notification policy for when those files are changed?
- Could we restrict some runners (runners that have specific rights) to some branches?
Thanks
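One mitigation worth sketching (the environment and secret names are assumptions): GitHub environments can be restricted to selected branches, so a deploy job tied to a protected environment cannot reach the deployment secrets from a workflow modified on another branch:

```yaml
# Hypothetical workflow: the job only gets DEPLOY_TOKEN when it runs against the
# "production" environment, whose deployment branch policy is limited to master.
name: deploy
on:
  push:
    branches: [master]
jobs:
  deploy_to_prod:
    runs-on: ubuntu-latest
    environment: production   # branch-restricted in the repo's environment settings
    steps:
      - uses: actions/checkout@v3
      - run: ./deploy.sh      # placeholder for the real deploy step
        env:
          DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}
```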
- Is it possible to enforce via CI that module_a does not import anything from module_b?
I'm maintaining several open source projects and I want to write code at work that nudges people to do the right thing.
I have a situation where I see people importing stuff in module_a from module_b, but that should not happen. There are two reasons for it:

- Production code importing stuff from test code: I hope I don't need to explain why that is a bad idea.
- Import cycles: some modules are so basic that they should not import any other modules from the package (e.g. constants.py / errors.py / utils.py).
For this question, you can assume that all imports happen on module level (hence not inside a function).
Is it possible to enforce via CI (e.g. mypy / pytest / flake8) that module_a does not import anything from module_b?
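One way to enforce this in CI, sketched below with the hypothetical module_a / module_b names from the question: a small check, runnable under pytest, that parses each source file with the standard-library ast module and fails when a forbidden import appears. (Dedicated tools such as import-linter exist for the same purpose.)

```python
import ast

# importer -> set of top-level modules it must not import (hypothetical names)
FORBIDDEN = {"module_a": {"module_b"}}

def imported_modules(source: str) -> set:
    """Collect the top-level package names imported anywhere in the source."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    return names

def forbidden_imports(module_name: str, source: str) -> set:
    """Return the banned modules that `module_name` actually imports."""
    return imported_modules(source) & FORBIDDEN.get(module_name, set())

# In a real test you would read each file from disk and assert the set is empty:
print(forbidden_imports("module_a", "from module_b import helper\nimport os"))
# prints: {'module_b'}
```

Since the question assumes all imports are at module level, a static scan like this sees every import; a pytest test asserting `forbidden_imports(...) == set()` for each file turns it into a CI gate.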
- How to run CLI migrations in a Continuous Integration pipeline on a private database on AWS RDS
I am currently using a tool that allows you to apply database migrations only using a CLI (Prisma). My database is in a private network in AWS.
To do it manually, I currently do this:
ssh -i $SSH_PATH_TO__MY_IDENTITY_FILE ec2-user@${BASTION_HOSTNAME} \
  -N -f -L $DB_PORT:${DB_HOSTNAME}:5432 &
A bastion, in AWS parlance, is just a VM that has public access but can also reach private networks. This ssh command creates a tunnel through the bastion so that I can reach the private machine on my local $DB_PORT. Then I apply the migrations locally but, since the database is listening on a local port, I can reach my production database.

Here is the question: how do I move this to a CI/CD pipeline?
I was thinking about doing this:

- Use a docker image that has ssh and nodejs installed.
- Move the identity file to an env variable in the CI/CD.
- Install the migrations tool there.
- Create a tunnel as I did above.
- Modify the configuration file to point to the production database.
- And then, finally, apply the migrations.
I think this could work, but it seems a lot of trouble and I was wondering that maybe there was a better, standard way to approach this. Maybe triggering a Lambda function that runs inside the private network?
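The steps above can be sketched as a single pipeline step. This is a hedged sketch, not a tested pipeline: the image choice, variable names (SSH_IDENTITY_BASE64, BASTION_HOSTNAME, etc.), and the prisma invocation are assumptions; a Lambda or CI runner inside the VPC, as suggested, would avoid the tunnel entirely:

```yaml
# Hypothetical bitbucket-pipelines.yml step for migrations through a bastion tunnel.
pipelines:
  branches:
    master:
      - step:
          name: Run Prisma migrations through a bastion tunnel
          image: node:16
          script:
            - apt-get update && apt-get install -y openssh-client
            # identity file stored as a base64-encoded secured variable
            - echo "$SSH_IDENTITY_BASE64" | base64 -d > id_rsa && chmod 600 id_rsa
            - ssh -i id_rsa -o StrictHostKeyChecking=no ec2-user@$BASTION_HOSTNAME -N -f -L 5432:$DB_HOSTNAME:5432
            # point the tool at localhost, where the tunnel listens
            - DATABASE_URL="postgresql://$DB_USER:$DB_PASSWORD@localhost:5432/$DB_NAME" npx prisma migrate deploy
```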
- Web API with Microsoft Identity Platform Authentication
I created a new ASP.NET Core Web API with Authentication type "Microsoft identity platform" based on the VS2022 template.
On Azure, I set up the following with my trial subscription (where I am global administrator):
- Create app API
- Create app registration
To double-check that the app is running, I also added a TestController, which returns a simple string.

After setting up Azure, I changed the appsettings.json file accordingly:
"AzureAd": {
  "Instance": "https://login.microsoftonline.com/",
  "Domain": "[xxx]",
  "TenantId": "[xxx]",
  "ClientId": "[xxx]",
  "Scopes": "access_as_user",
  "CallbackPath": "/signin-oidc"
},
Everything else is already set up nicely in Program.cs (I extracted only the relevant code for you):
builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApi(builder.Configuration.GetSection("AzureAd"));

app.UseHttpsRedirection();
app.UseAuthentication();
app.UseAuthorization();
After that, I set up a YAML pipeline on Azure DevOps to deploy the API directly to the cloud.
The deployment seems to work: I can access the function of the TestController. However, when I try to access the WeatherForecast endpoint (I try this with the global admin user), I receive the status code:
401 Unauthorized
What have I tried so far?
- I added the user as Contributor under Access Control (IAM), for testing, on both the subscription and the App Service itself (i.e. the App API)
Note, the WeatherForecastController of the template looks like the following:
[ApiController]
[Route("[controller]")]
[RequiredScope(RequiredScopesConfigurationKey = "AzureAd:Scopes")]
public class WeatherForecastController : ControllerBase
{
    private static readonly string[] Summaries = new[]
    {
        "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
    };

    private readonly ILogger<WeatherForecastController> _logger;

    public WeatherForecastController(ILogger<WeatherForecastController> logger)
    {
        _logger = logger;
    }

    [HttpGet(Name = "GetWeatherForecast")]
    public IEnumerable<WeatherForecast> Get()
    {
        return Enumerable.Range(1, 5).Select(index => new WeatherForecast
        {
            Date = DateTime.Now.AddDays(index),
            TemperatureC = Random.Shared.Next(-20, 55),
            Summary = Summaries[Random.Shared.Next(Summaries.Length)]
        })
        .ToArray();
    }
}
Any ideas of what I am missing here or what I could check on Azure to narrow down the problem?
- Health Check on Web App Service vs Availability on Application Insights
- How can I install my own Python package in Azure Web App Service Linux environment?
I deployed my Python Django project to the Azure Web App Service Linux environment and found an error most likely caused by Azure's own Python version (see this post: Azure Text to Speech Error: 0x38 (SPXERR_AUDIO_SYS_LIBRARY_NOT_FOUND) when deploying to Azure App Services Linux environment). However, I cannot seem to find a way to upload my own Python (3.10) to the Azure Web Service Kudu site. Is there a way to do it?
Thanks.
- Bitbucket Pipelines hangs when testing a Nuxt app with Cypress
I have a Nuxt app that I want to test with Cypress in CI. I've seen in the Cypress documentation that you have to install some third-party package to wait for the server to start and then run your tests. I then installed the wait-on package and created these scripts in package.json.
package.json
"scripts": {
  "start:wait": "yarn start & wait-on http://localhost:3000",
  "run:cypress": "cypress run"
},
For the CI, I install the dependencies, run nuxt generate to bundle the app, and then test using the step below.

bitbucket-pipelines.yml
- step: &e2e-test
    image: cypress/included:9.4.1
    name: Run application E2E tests
    caches:
      - cypress
    script:
      - yarn start:wait
      - yarn record:cypress -- --config video=true --parallel --ci-build-id $BITBUCKET_BUILD_NUMBER
    artifacts:
      # store any generated images and videos as artifacts
      - cypress/screenshots/**
      - cypress/videos/**

# A little bit below
- parallel:
    - step: *e2e-test
    - step: *e2e-test
    - step: *e2e-test
It works when I test it locally but in CI, Bitbucket hangs and nothing happens.
I've also seen the start-server-and-test package, but I don't know how to pass extra arguments like (--config video=true --parallel --ci-build-id $BITBUCKET_BUILD_NUMBER) to run:cypress from the CI.

"ci": "start-server-and-test 'yarn start' http://localhost:3000 'run-cypress <extra-arguments-here?>'"

Like this, but with arguments from the CI to retrieve $BITBUCKET_BUILD_NUMBER.
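One sketch of a way around this (the cypress:ci script name is made up, and it relies on start-server-and-test's documented support for npm script names as commands): move the extra arguments into their own npm script, where the shell expands $BITBUCKET_BUILD_NUMBER when the script runs, and reference that script name:

```json
"scripts": {
  "cypress:ci": "cypress run --config video=true --parallel --ci-build-id $BITBUCKET_BUILD_NUMBER",
  "ci": "start-server-and-test start http://localhost:3000 cypress:ci"
}
```

Here `start` means `npm start`, so the app's existing start script launches the server; once http://localhost:3000 responds, `cypress:ci` runs with the CI-provided build number.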
- Jest test stuck on bitbucket pipelines without any error
We use Bitbucket Pipelines in our CI for testing. Our application is NestJS with TypeScript, tested with Jest.

All tests used to run fine, but for a few days now (May 2022) the tests have been getting stuck after some suite, and the suite where they get stuck is fairly random.

The tests don't fail, and we don't get any memory warning or anything else; the run just hangs in the pipeline. We have to stop the pipeline because it never finishes.

Unfortunately, we don't have any error to investigate further.

What could we do to inspect more details?
- Is there a way to reproduce a flaky test in jest?
I work on a React application with a Bitbucket pipeline that runs its tests (with Jest). There's a test that sometimes fails and sometimes doesn't, and I want to fix it, but I can't really check whether my solutions work. Does someone know a way to reproduce a flaky test failure?
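One common approach, sketched below: rerun the suspect test in a loop until it fails. The jest invocation and file path are placeholders; a stub that "fails" on the 7th run stands in for the real command so the sketch is self-contained. Flaky failures often depend on timing or test order, so running with --runInBand, or looping over the whole suite instead of one file, can also help reproduce them.

```shell
# Stand-in for the real command, e.g.: yarn jest src/flaky.test.js --runInBand
run_once() {
  COUNT=$((COUNT + 1))
  [ "$COUNT" -ne 7 ]   # stub: succeeds until the 7th call
}

COUNT=0
ATTEMPT=0
FAILED_AT=0
while [ "$ATTEMPT" -lt 50 ]; do
  ATTEMPT=$((ATTEMPT + 1))
  if ! run_once; then
    FAILED_AT=$ATTEMPT
    break
  fi
done
echo "reproduced failure on attempt $FAILED_AT"
```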