gitlab-runner clone fails: The requested URL returned error: 500
A GitLab server is hosted on Alibaba Cloud. There is no DNS record for its domain, so the server's /etc/hosts maps git.server to 127.0.0.1. Another machine runs gitlab-runner to build the code, and its /etc/hosts maps git.server to 116.*.*.194. Cloning code over SSH from that machine works fine.
When building with the runner, the following error occurs:

fatal: unable to access 'http://gitlab-ci-token:firstname.lastname@example.org/tuitui/test-ci.git/': The requested URL returned error: 500
/etc/hosts on the runner machine:

116.*.*.194 git.server

The runner's config.toml:

[[runners]]
  name = "api-tuitui"
  url = "http://git.server/"
  token = "2dbcbaba3*********e3499064"
  executor = "shell"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]

The runner was installed to run as the root user:

gitlab-runner install --user=root --working-directory=/var/www
Running with gitlab-runner 11.4.2 (cf91d5e1)
  on admin 83a7c629
Using Shell executor...
Running on iZwz98jvb8bcz3jj1i5x2mZ...
Cloning repository...
Cloning into '/home/gitlab-runner/builds/83a7c629/0/tuitui/test-ci'...
fatal: unable to access 'http://gitlab-ci-token:email@example.com/tuitui/test-ci.git/': The requested URL returned error: 500
bash: line 61: cd: /home/gitlab-runner/builds/83a7c629/0/tuitui/test-ci: No such file or directory
ERROR: Job failed: exit status 1
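One lever worth knowing when the runner's HTTP clone URL differs from what GitLab advertises: gitlab-runner supports a clone_url setting in config.toml that overrides the clone address. A hypothetical sketch; the clone_url value below is a placeholder for an address the runner can actually reach, not something from the original post:

```toml
# /etc/gitlab-runner/config.toml -- sketch; <reachable-address> is a placeholder
[[runners]]
  name = "api-tuitui"
  url = "http://git.server/"
  # Force clones to go through an address the runner machine can resolve/reach,
  # regardless of the URL GitLab advertises for the repository:
  clone_url = "http://<reachable-address>/"
  token = "..."
  executor = "shell"
```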
See also questions close to this topic
How to permanently undo recent commits on a git remote but keep them in my local?
I made a git repo (only a master branch) with one remote and one local. There are no other users who have cloned it but the remote path can be cloned by a few users.
My local clone is at commit #17 and I have pushed up to commit #12 to the remote. I've come to realize that every commit after #6 should not be shared, and the remote needs to (for now) remain at #6.
I don't want to lose commits 1-17 and their history, but my understanding is that in order to reset the remote I first have to reset my local to #6 and push -f that. Is it possible to reset the remote to #6 while locally remaining ahead at #17, so that if someone clones the remote they can't see the vulnerable commits?
My idea is that I would need to clone my local to a second local first, so that the second clone keeps all 17 commits and the history, before executing the reset followed by the push -f. Is this how one would approach this situation?
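For what it's worth, the second clone is not strictly needed: any local branch (or tag) keeps commit #17 reachable while the current branch is reset and force-pushed. A runnable sketch in a throwaway repo, with the commit counts shortened to 3 and the remote moved back by 2:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q --bare remote.git
git clone -q "$dir/remote.git" local && cd local
git config user.email you@example.com && git config user.name you
for i in 1 2 3; do echo "$i" > f.txt; git add f.txt; git commit -qm "commit $i"; done
git push -q origin HEAD

git branch keep-all              # keep-all still points at commit 3
git reset -q --hard HEAD~2       # current branch moves back to commit 1
git push -qf origin HEAD         # remote is now also at commit 1

git log -1 --format=%s keep-all  # prints: commit 3
```

Anyone cloning the remote now only sees commit 1, while the keep-all branch preserves the full history locally and is never pushed.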
What does the ~ mean after the branch of a git rebase --onto?
In the example that the Git docs give for git rebase --onto, it's not clear what the ~ means:
A range of commits could also be removed with rebase. If we have the following situation:
E---F---G---H---I---J  topicA
then the command
git rebase --onto topicA~5 topicA~3 topicA
would result in the removal of commits F and G:
This is useful if F and G were flawed in some way, or should not be part of topicA. Note that the argument to --onto and the parameter can be any valid commit-ish.
Does topicA~5 mean 5 commits back from the head of topicA? (So counting backwards?)
I can't think of anything else that it could mean, but I want to be sure before I try it on my repo.
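Yes: rev~n is the n-th first-parent ancestor, so topicA~5 is 5 commits back from the head of topicA (~1 is the parent, ~2 the grandparent, and so on). A quick way to convince yourself in a scratch repo, using the same commit names as the docs example:

```shell
set -e
d=$(mktemp -d); cd "$d"; git init -q
git config user.email you@example.com && git config user.name you
for s in E F G H I J; do echo "$s" > f.txt; git add f.txt; git commit -qm "$s"; done
# HEAD is J; counting backwards: ~1=I, ~2=H, ~3=G, ~4=F, ~5=E
git log -1 --format=%s HEAD~5   # prints: E
git log -1 --format=%s HEAD~3   # prints: G
```

That matches the docs example: `git rebase --onto topicA~5 topicA~3 topicA` replays everything after G (i.e. H, I, J) onto E, dropping F and G.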
Confusing git conflicts
SSH access to github repo on codeship
I am attempting to push to GitHub from a container on Codeship. After getting a Permission denied (publickey) error, I followed the suggestion here:
I created a service called publish and some steps to try to recreate the article's suggestion.
My codeship_services.yml file:
# codeship_services.yml
publish:
  build:
    image: codeship/setting-ssh-key-test
    dockerfile: Dockerfile.publish
  encrypted_env_file: codeship.env.encrypted
  volumes:
    - ./.ssh:/root/.ssh
My codeship_steps.yml file:
- name: temp publish service
  service: publish
  command: /bin/bash -c "echo -e $PRIVATE_SSH_KEY >> /root/.ssh/id_rsa"
- name: chmod id_rsa
  service: publish
  command: chmod 600 /root/.ssh/id_rsa
- name: add server to list of known hosts
  service: publish
  command: /bin/bash -c "ssh-keyscan -H github.com >> /root/.ssh/known_hosts"
- name: confirm ssh connection to server, authenticating with generated public ssh key
  service: publish
  command: /bin/bash -c "ssh -T firstname.lastname@example.org"
When I run jet steps, however, I still get the Permission denied (publickey) error:
(step: temp_publish_service) success ✔
(step: chmod_id_rsa)
(step: chmod_id_rsa) success ✔
(step: add_server_to_list_of_known_hosts)
(service: publish) (step: add_server_to_list_of_known_hosts) # github.com:22 SSH-2.0-babeld-80573d3e
(service: publish) (step: add_server_to_list_of_known_hosts) # github.com:22 SSH-2.0-babeld-80573d3e
(service: publish) (step: add_server_to_list_of_known_hosts) # github.com:22 SSH-2.0-babeld-80573d3e
(step: add_server_to_list_of_known_hosts) success ✔
(step: confirm_ssh_connection_to_server,_authenticating_with_generated_public_ssh_key)
(service: publish) (step: confirm_ssh_connection_to_server,_authenticating_with_generated_public_ssh_key) Permission denied (publickey).
(step: confirm_ssh_connection_to_server,_authenticating_with_generated_public_ssh_key) error ✗
(step: confirm_ssh_connection_to_server,_authenticating_with_generated_public_ssh_key) container exited with a 255 code
I have generated the keys as instructed in the article and added the encrypted private key to
Is there something I am missing?
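One thing worth double-checking, independent of Codeship: in the first step, `echo -e $PRIVATE_SSH_KEY` leaves the variable unquoted, so the shell word-splits the key before echo sees it, and `echo -e` itself behaves differently across shells. A minimal sketch of the robust reconstruction (the key value below is a placeholder, not a real key):

```shell
# A placeholder key stored as a single line with literal \n two-character
# sequences, the way an env var usually holds it:
KEY='-----BEGIN KEY-----\nabcd\n-----END KEY-----'
# %b expands the \n escapes; keeping "$KEY" quoted preserves it as one word:
printf '%b\n' "$KEY" > id_rsa_test
wc -l < id_rsa_test   # the file now has 3 lines
```

If the reconstructed /root/.ssh/id_rsa comes out as one long line, ssh silently fails to load it and falls back to "Permission denied (publickey)".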
sshpass from iplist and run tcptraceroute
Number of servers in the list: 40.
I want to run tcptraceroute on each server for all 40 IPs in the list. For this I need to loop over sshpass and run tcptraceroute.
When I run the code below, it only processes the first IP in the list, runs tcptraceroute with that same IP, and exits.
IFS=$IFS,
USER='*********'
PASSWORD='********'
PORT='22'
while read ip; do
  sshpass -p $PASSWORD ssh -i turbot -t -o StrictHostKeyChecking=no $USER@$ip "sudo -s /usr/bin/tcptraceroute "$ip" $PORT" >> Res.txt
done < .PrivateIP-List.txt
I'm trying to write the traceroute results to the Res.txt file.
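The classic cause of a `while read` loop stopping after one iteration is that the command inside it (here ssh) reads from the same stdin as `read` and drains the rest of the IP list; redirecting the inner command's stdin (what `ssh -n` does) avoids it. A self-contained demonstration with `cat` standing in for ssh:

```shell
list=$(mktemp); printf 'a\nb\nc\n' > "$list"

broken=0
while read -r ip; do
  cat > /dev/null               # drains the remaining lines, like ssh does
  broken=$((broken + 1))
done < "$list"

fixed=0
while read -r ip; do
  cat < /dev/null > /dev/null   # stdin redirected away, the effect of `ssh -n`
  fixed=$((fixed + 1))
done < "$list"

echo "broken=$broken fixed=$fixed"   # prints: broken=1 fixed=3
```

In the original script the equivalent fix is appending `< /dev/null` to the ssh command (or using `ssh -n`, which however conflicts with the `-t` flag used for sudo).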
How to specify remote shell in fabric 2.4
I need to connect via SSH to a remote host from Python. I've chosen fabric 2.4 because it can run multiple commands in a single SSH session. But I need to use a remote shell different from sh/bash/etc.; my shell is powered by clixon.
All examples I've found described changing shell in fabric 1.X.
How can I configure it in fabric 2.4?
Or maybe you can advise another SSH library for Python that can run multiple commands in a single SSH session?
P.S. I can't change the default shell for user in
git commits heatmap for a newly created project from an existing one
I pushed an existing git project to GitLab and wanted an intuitive impression of the commit history through the GitLab heatmap. Yet from the heatmap I can only tell that I pushed to this repo that day; none of the earlier commit history is reflected in it.
Is there a way to show my previous git commits in the heatmap? Or is the heatmap only to reflect the actions caused by
gitlab server timeout after configuring custom port
I have successfully installed GitLab and run it with the embedded nginx, and I have also successfully activated HTTPS. But when I try to change the external URL port (in both http and https modes), I get a timeout error in my browser.
For information, I am running gitlab on a remote Ubuntu 18.04.2 server available from a registered domain and browsing with both Chrome and Safari from my computer from home.
I have reconfigured gitlab already a hundred times. I have checked the different involved ports to make sure they are allowed by firewall and listened to by the expected processes. Everything looks fine to me.
In /etc/gitlab/gitlab.rb I have done what is described on the GitLab website for custom ports:
external_url 'http://gitlab.mydomain.com:10443'
...
nginx['enable'] = true
When on standard port 80 and 443 I get the access logs in /var/log/gitlab/nginx/gitlab_access.log. But as soon as I have added the custom port I don't see anything anymore.
Result of the 'ufw status verbose' cmd:
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To              Action    From
--              ------    ----
...
8080/tcp        ALLOW IN  Anywhere
10443/tcp       ALLOW IN  Anywhere
...
8080/tcp (v6)   ALLOW IN  Anywhere (v6)
10443/tcp (v6)  ALLOW IN  Anywhere (v6)
Result of the 'netstat -tulpn' cmd:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
tcp   0      0      127.0.0.1:8080   0.0.0.0:*        LISTEN  14865/unicorn maste
tcp   0      0      0.0.0.0:10443    0.0.0.0:*        LISTEN  21422/nginx: master
tcp   0      0      0.0.0.0:8060     0.0.0.0:*        LISTEN  21422/nginx: master
What is even weirder is that the command 'nc -vz gitlab.mydomain.com 10443' succeeds:
Connection to gitlab.mydomain.com 10443 port [tcp/*] succeeded!
The request to http://gitlab.mydomain.com:10443 (or its SSL version when activated) ends with ERR_CONNECTION_TIMED_OUT in Chrome, and in Safari with 'Safari can't open the page "gitlab.mydomain.com:10443" because the server where this page is located isn't responding'.
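For reference, the documented omnibus settings for a custom port look like the sketch below (the values are placeholders), and `sudo gitlab-ctl reconfigure` must be run after editing. If this is already in place and `nc` connects while browsers time out, the usual remaining suspects are something between the browser and the server: a proxy, an outbound firewall on the client network, or the cloud provider's security group.

```ruby
# /etc/gitlab/gitlab.rb -- hypothetical sketch, values are placeholders
external_url 'https://gitlab.mydomain.com:10443'
nginx['enable'] = true
nginx['listen_port'] = 10443   # normally inferred from external_url
nginx['listen_https'] = true
```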
minimize git diff by undoing trivial changes
When working with code (mostly c++ in my case) and specifically with git and gitlab, I often find myself working on a specific merge request and feature addition for several weeks. At the end, I arrive with a very long merge request that is very hard for the maintainers to understand, because I have committed a lot of changes.
Some of these changes are intentional and important to the feature at hand; others are trivial, like fixing the indentation of a certain section of code, which I often do to improve readability while I'm debugging. However, in order for the MR to be as small and readable as possible, I'd like to "undo" all the trivial changes that do not affect the code itself (only the layout) before removing the WIP label from my MR. So I sometimes find myself going through my MR and undoing all those prettifications by hand in order to make the MR more readable for the reviewers.
This is a lot of mindless work whose time could be better spent elsewhere.
Is there a script or mechanism that I can use (specifically on c++ code) to go through the code and undo all trivial changes (for example, whitespace changes) with respect to a certain commit? This would simplify my life significantly. I could see myself writing a script for this, but I'm hoping for there to be some git magic that I can use, or for someone else already having solved this issue for me. Any suggestions?
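There is a reasonably well-known git recipe for exactly the whitespace case: take a whitespace-ignoring diff against the base, stage only that, and then reset the worktree to the staged state, which drops every whitespace-only hunk. A runnable sketch in a scratch repo with one whitespace-only edit and one real edit:

```shell
set -e
d=$(mktemp -d); cd "$d"; git init -q
git config user.email you@example.com && git config user.name you
printf 'alpha\nbeta\n' > f.c
git add f.c && git commit -qm base

printf '  alpha\nbeta2\n' > f.c   # re-indent alpha (trivial), change beta (real)

# Stage only the non-whitespace hunks, then sync the worktree to the index:
git diff -U0 -w --no-color | git apply --cached --ignore-whitespace --unidiff-zero
git checkout -- f.c

cat f.c   # prints: alpha / beta2 -- the re-indent is gone, the real change stays
```

To clean up against a base commit rather than HEAD, give the diff that commit (e.g. `git diff -U0 -w origin/master`). For C++ layout changes beyond pure whitespace, running clang-format on both the base and the branch version is the heavier but more thorough option.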
How do I add a timer to my gitlab-ci job to make it stop within 1 minute in gitlab-ci.yml
I have a spring boot application and I am configuring gitlab-ci.yml to run my integration tests. In order to achieve that I need my spring boot app running.
I am able to bring up my app using the spring-boot-maven-plugin, but the job I configured never completes while the app is running, and times out at 1 hour.
Is there a way to end a gitlab-ci job after a minute or two?
here is my gitlab-ci.yml config
server_start:
  stage: test
  script:
    - mvn spring-boot:start
Are there other examples of how to configure the gitlab-ci.yml to check if the server is up?
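Not the original poster's config, but a common shape for this kind of job, sketched under the assumption that the app exposes a health endpoint on port 8080 (the endpoint, port, and timeout values are placeholders): start the forked app, poll until it answers or a hard `timeout` expires, run the tests, then stop the app so the job can finish.

```yaml
integration_test:
  stage: test
  script:
    - mvn spring-boot:start    # forks the app and returns
    # fail the job after 120s if the app never comes up:
    - timeout 120 sh -c 'until curl -sf http://localhost:8080/actuator/health; do sleep 2; done'
    - mvn verify               # the integration tests
    - mvn spring-boot:stop
```

Newer GitLab versions also support a per-job `timeout:` keyword as a coarser safety net; whether it is available depends on the GitLab version in use.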
GitLab CI get last artifact
I'm trying to get the latest build artifact using
curl. Here's what I've tried.
First, get last pipeline id:
curl -v -H "Content-Type: application/json" -H "PRIVATE-TOKEN: <my-token-here>" "https://<project>/api/v4/projects/<project>/pipelines?per_page=1&page=1"
Next, get job id based on pipeline id just obtained before:
curl -sS --header "PRIVATE-TOKEN: <my-token-here>" "https://[redacted,host]/api/v4/projects/[redacted,project]/pipelines/<pipeline-id>/jobs" | jq '.[] | select(.name == "build-assets" and .status == "success" and .artifacts_file != null) | .id'
Finally, download the artifacts to build.zip based on the job id:
curl -sS --header "PRIVATE-TOKEN: <my-token-here>" "https://[redacted,host]/api/v4/projects/[redacted, project]/jobs/<JOB_ID>/artifacts" > build.zip
These steps above do work, but I have to hit three endpoints (and process the JSON response for each step).
I also read in GitLab's documentation, that there's a single endpoint available for this. So I also tried this:
curl -sS --header "PRIVATE-TOKEN: <my-token-here>" "https://<url>/<namespace>/<project>/-/jobs/artifacts/<refs>/download?job=<job_name>"
but this always redirects me to the login page, saying this:
<html><body>You are being <a href="https://<url>/users/sign_in">redirected</a>.</body></html>
Is there any simpler way to do this task? Or how do I use the endpoint described in the documentation above properly?
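The `/-/jobs/artifacts/...` path that keeps redirecting is the web UI route, which expects a browser session. The API has an equivalent single call, GET /projects/:id/jobs/artifacts/:ref/download?job=<name>, which does accept PRIVATE-TOKEN. A small sketch that only builds the URL (the host, project id, ref, and job name below are placeholders):

```shell
# Build the one-call artifact URL from the v4 API
artifact_url() {
  host=$1; project=$2; ref=$3; job=$4
  printf 'https://%s/api/v4/projects/%s/jobs/artifacts/%s/download?job=%s\n' \
    "$host" "$project" "$ref" "$job"
}

artifact_url gitlab.example.com 42 master build-assets
# prints: https://gitlab.example.com/api/v4/projects/42/jobs/artifacts/master/download?job=build-assets
```

Fetching then reduces to a single request: `curl -sS --header "PRIVATE-TOKEN: <my-token-here>" -o build.zip "$(artifact_url ...)"`.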
Automatic version increase in Cargo.toml for CI purposes
As part of my software's CI process, I also create Debian packages that I put into a staging repository, so that the software can be accessed in the ultimately intended fashion on testing systems. To create the DEB files, the cargo-deb crate is used; my (GitLab) CI runner managing the staging repo does so using reprepro.
The issue I am running into is the Debian package version. cargo-deb uses the version property specified in Cargo.toml for the created package's meta information, and that semantic version triple should not change across many staging builds. At the moment, this forces me to manually adjust the version string to something like "X.Y.Z-preV" before every commit. If I ever forget to bump the "V" part of the version, my pipeline fails, as reprepro complains that it got the same version twice (assuming the rest of the build succeeded).
I could of course write a shell script that increments this value by parsing the Cargo.toml file and rewriting the version line, but I keep wondering if there is a more elegant way to do this, perhaps utilizing some more obscure flags from cargo-deb.
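For the shell-script route, a minimal sketch, assuming the version sits on its own line in Cargo.toml as version = "X.Y.Z" (or "X.Y.Z-preN"); it prints the bumped file to stdout rather than rewriting it in place:

```shell
# Append -pre1 to a bare version, or increment an existing -preN suffix.
bump_pre() {
  awk '
    /^version *= *"/ {
      if (match($0, /-pre[0-9]+/)) {
        n = substr($0, RSTART + 4, RLENGTH - 4) + 1   # current pre number + 1
        sub(/-pre[0-9]+/, "-pre" n)
      } else {
        sub(/"$/, "-pre1\"")                          # first staging build
      }
    }
    { print }
  ' "$1"
}

printf 'version = "1.2.3"\n' > Cargo.toml
bump_pre Cargo.toml   # prints: version = "1.2.3-pre1"
```

In a pipeline this would redirect to a temp file and move it over Cargo.toml. Deriving the suffix from something CI already provides (e.g. the pipeline number) avoids committing version bumps at all; whether cargo-deb itself offers a version-override flag is worth checking against the docs of the cargo-deb version in use.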
Execute external bash script inside GitLab-ci Docker build
I would like to execute an external bash script (on the local machine) from gitlab-ci.yml, which uses the docker:stable image. Specifically, I would like to execute startup.sh, which is located outside the GitLab docker image. Is this possible, or are there better options?
image: docker:stable

#Build script
variables:
  CI_DEBUG_TRACE: "true"
  DOCKER_DRIVER: overlay

before_script:
  - docker --version

build:
  services:
    - docker:dind
  script:
    - docker build --no-cache -t <tag> .
    - docker login -u root -p <pass> <registry>
    - docker tag ...
    - docker push ...
    - echo "build completed"
  stage: build
  tags:
    - <tag>

deploy_staging:
  stage: deploy
  script:
    - ./sh startup.sh
#!/bin/bash
docker login -u root -p <pass>
docker pull <image>
docker-compose up -d
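One detail that will bite regardless of where the script lives: `./sh startup.sh` tries to execute a file literally named `sh` in the working directory. If startup.sh is committed to the repository, every executor sees it in the checkout and the job can invoke it directly; a sketch of that variant:

```yaml
deploy_staging:
  stage: deploy
  script:
    - chmod +x ./startup.sh
    - ./startup.sh
```

If the script must instead stay on one specific host, the usual route is registering a runner with the shell executor on that machine and routing the deploy job to it via tags, since a docker:stable job cannot see files outside its container.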