Best practices for performance and security in deployment
I have a general question: what are the best practices for security and performance in a deployment? If I want to make it specific to a technology, I can say Kubernetes, Docker, and AWS. I know it is a very general question, but it would help me a lot.
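As one concrete illustration of the kind of baseline usually recommended for both concerns, here is a minimal sketch of a Kubernetes pod spec that runs the container with least privilege (security) and explicit resource bounds (performance). All names and values are placeholders, not a definitive setup:

apiVersion: v1
kind: Pod
metadata:
  name: example-app            # placeholder name
spec:
  containers:
    - name: app
      image: example/app:1.0   # pin an image tag rather than :latest
      securityContext:
        runAsNonRoot: true               # don't run the process as root
        readOnlyRootFilesystem: true     # container can't modify its own filesystem
        allowPrivilegeEscalation: false
      resources:
        requests:              # what the scheduler reserves for the pod
          cpu: 100m
          memory: 128Mi
        limits:                # hard caps so one pod can't starve the node
          cpu: 500m
          memory: 256Mi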
See also questions close to this topic
react kubernetes deployment, port 80 only works
I'm learning how kubernetes works, and I've deployed a basic react app (using create-react-app).
In my yaml file I've set containerPort: 80, and then used a NodePort service targeting port 80. Everything works fine.
But why does it only work with port 80? I've tried containerPort 3000 and 8080, and neither works.
Is there something special about port 80? Why does it only work when I use that port?
Below is my yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-app
  template:
    metadata:
      labels:
        name: my-app
    spec:
      containers:
        - name: my-app
          image: <my repo>/my-app
          ports:
            - containerPort: 80
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: default
spec:
  type: NodePort
  selector:
    name: my-app
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 30001
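For what it's worth, containerPort is largely informational; what matters is the port the process inside the container actually listens on. The standard production image for a create-react-app build typically serves through nginx on port 80 (an assumption about your image), which would explain the behavior. If the container really listened on 3000 (for example, the CRA dev server), the Service would have to target that port, as in this sketch:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    name: my-app
  ports:
    - port: 80          # port the Service exposes
      targetPort: 3000  # hypothetical: port the container process listens on
      protocol: TCP
      nodePort: 30001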
Docker installation failing in Linux Mint when trying to add the repository
I'm trying to install Docker and this is the error I get when I try to do it in Linux Mint.
adib@adib-Inspiron-15-3552:~$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \ focal \ stable"
WARNING:root:could not open file '/etc/apt/sources.list.d/additional-repositories.list'
Note that I have focal there because it's the Ubuntu base of my Linux Mint release (ulyana).
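A possible explanation: the backslashes in that command are line-continuation characters from the multi-line form in Docker's installation docs; pasted onto a single line they become literal characters inside the quoted string and corrupt the repository entry. A sketch of the single-line form, plus (an assumption about the WARNING) creating the missing Mint sources file first:

sudo touch /etc/apt/sources.list.d/additional-repositories.list   # assumption: silences the WARNING
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"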
Can't connect to elasticache redis from docker container running in EC2
As a part of my CI process, I am creating a docker-machine EC2 instance and running 2 docker containers inside of it via docker-compose. The server test script attempts to connect to an AWS elasticache redis instance within the same VPC as the EC2. When the test script is run I get the following error:
1) Storage check cache connection should return seeded value in redis:
   Error: Timeout of 2000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/usr/src/app/test/scripts/test.js)
    at listOnTimeout (internal/timers.js:549:17)
    at processTimers (internal/timers.js:492:7)
When I try to connect via redis-cli from the EC2 itself:
Could not connect to Redis at ***.cache.amazonaws.com:6379: Connection timed out
Why can't I connect to my Redis instance? Have I incorrectly configured my Docker/AWS setup? Any help would be appreciated.
Relevant section of my docker-compose.yml:
version: '3.8'
services:
  server:
    build:
      context: ./
      dockerfile: Dockerfile
    image: docker.pkg.github.com/$GITHUB_REPOSITORY/$REPOSITORY_NAME-server:github_ci_$GITHUB_RUN_NUMBER
    container_name: $REPOSITORY_NAME-server
    command: npm run dev
    ports:
      - "8080:8080"
      - "6379:6379"
    env_file: ./.env
Server container Dockerfile:
FROM node:12-alpine

# create app dir
WORKDIR /usr/src/app

# install dependencies
COPY package*.json ./
RUN npm install

# bundle app source
COPY . .

EXPOSE 8080 6379

CMD ["npm", "run", "dev"]
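Since redis-cli times out from the EC2 host itself, the container setup is probably not the culprit. A common cause (an assumption here, worth checking) is that the ElastiCache security group does not allow inbound traffic on port 6379 from the instance. A sketch with placeholder group IDs (sg-redis for the cache, sg-ec2 for the docker-machine instance):

# Allow the EC2 instance's security group to reach Redis on 6379
# (sg-redis and sg-ec2 are placeholders for your actual group IDs)
aws ec2 authorize-security-group-ingress \
  --group-id sg-redis \
  --protocol tcp \
  --port 6379 \
  --source-group sg-ec2

Also note that publishing 6379 in docker-compose is unnecessary: that mapping exposes a container port to the host, while connecting out to ElastiCache needs no port mapping at all.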
How to secure my Laravel REST application
Hello, I have developed a REST API with Laravel and Passport. How can I know if my application is secure? I think the APIs are secure because I use the token from Passport, and with every request I have to send my token to access the data. I am asking this question because I found that if someone types a URL such as myserver/env, they can see the .env file with all the data (database connection). I block it from .htaccess, but I am asking here whether I have to do more than that to secure it. Thanks for your time!
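Blocking dotfiles at the web-server level is a reasonable extra layer on top of what the Passport tokens already give you. A minimal sketch for Apache 2.4 (assuming .htaccess files are honored); the more fundamental fix is to point the site's document root at Laravel's public/ directory so .env is never under the web root at all:

# Deny direct access to any dotfile, such as .env
<FilesMatch "^\.">
    Require all denied
</FilesMatch>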
Encrypting/hashing data that will be fetched with GET?
I am studying hashing/encrypting databases, but I have a question:

Let's suppose an app where you can create an account and save notes in it, like Evernote. When you log in, your password is compared with the saved password hash (bcrypt), and if it matches, you can log in.

But how can we encrypt/hash the notes? The notes will be fetched with a GET or a POST that only has an Authorization header. How can I encrypt the notes table and then decode the data? And what algorithm does what I want?

Note: I am using Node.js for the backend.
Sorry if this question seems dumb.
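Not dumb at all: hashing (bcrypt) is deliberately one-way, so it fits passwords but not notes; the notes have to be encrypted so the server can decrypt them when serving the GET. A minimal Node.js sketch using the built-in crypto module with AES-256-GCM. The key handling here (a hex key in the NOTE_KEY environment variable) is an illustrative assumption, not a production key-management scheme:

// Minimal sketch: encrypt/decrypt a note with AES-256-GCM.
// Assumption: NOTE_KEY holds a 64-hex-char (32-byte) key managed
// by the server; real apps should use proper key management.
const crypto = require('crypto');

const key = Buffer.from(process.env.NOTE_KEY, 'hex'); // 32 bytes

function encryptNote(plaintext) {
  const iv = crypto.randomBytes(12); // unique IV per note
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  const enc = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  const tag = cipher.getAuthTag();
  // store iv + auth tag + ciphertext together in the notes table
  return Buffer.concat([iv, tag, enc]).toString('base64');
}

function decryptNote(stored) {
  const buf = Buffer.from(stored, 'base64');
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const enc = buf.subarray(28);
  const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag); // verifies the note wasn't tampered with
  return Buffer.concat([decipher.update(enc), decipher.final()]).toString('utf8');
}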
What does the node field mean in k8s volume attachments?
I have the following
Name:         pvc-c8a0c1ee-b9e6-11e9-9ffa-0cc47ab04738
Namespace:    rook-ceph-system
Labels:       <none>
Annotations:  <none>
API Version:  rook.io/v1alpha2
Attachments:
  Cluster Name:   rook-ceph
  Mount Dir:      /var/lib/kubelet/pods/72fd4f89-5110-49b7-8d88-87488b58695c/volumes/ceph.rook.io~rook-ceph-system/pvc-c8a0c1ee-b9e6-11e9-9ffa-0cc47ab04738
  Node:           node-6.xyz.com
  Pod Name:       dev-cockroachdb-0
  Pod Namespace:  x-namespace
  Read Only:      false
Kind:  Volume
Metadata:
  Creation Timestamp:  2020-08-12T17:13:51Z
  Generation:          6
  Resource Version:    638003207
  Self Link:           /apis/rook.io/v1alpha2/namespaces/rook-ceph-system/volumes/pvc-c8a0c1ee-b9e6-11e9-9ffa-0cc47ab04738
  UID:                 db0a9491-95fe-49cd-8160-89031847d636
Events:  <none>
For the pod dev-cockroachdb-0 I'm getting the following error:

MountVolume.SetUp failed for volume "pvc-c8a0c1ee-b9e6-11e9-9ffa-0cc47ab04738" : mount command failed, status: Failure, reason: Rook: Mount volume failed: failed to attach volume pvc-c8a0c1ee-b9e6-11e9-9ffa-0cc47ab04738 for pod x-namespace/dev-cockroachdb-0. Volume is already attached by pod x-namespace/dev-cockroachdb-0. Status Pending

And the pod x-namespace/dev-cockroachdb-0 is currently scheduled to a different node. So, as you can see, the pod itself is on a different node than the Node listed in the volume attachment. My questions:

- Does the node field in Volume.Attachments point to the node on which the pod (to which the volume is attached) is located? (So if the volume is attached to a pod on node NodeA, will the value of the node field for the volume attachment be NodeA?)
- May this error happen because of a failure to correctly detach the volume on some node?
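One way to compare the two nodes directly (a suggestion; the volumes.rook.io resource name is taken from the Self Link in the output above):

# Node the pod is actually running on (NODE column):
kubectl get pod dev-cockroachdb-0 -n x-namespace -o wide

# Node recorded in the Rook volume attachment:
kubectl describe volumes.rook.io pvc-c8a0c1ee-b9e6-11e9-9ffa-0cc47ab04738 -n rook-ceph-system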
Kubernetes dashboard deployment exists, pod not being created
Our team was trying to fix some issues with the Kubernetes dashboard because it couldn't get a secret. We are using dashboard version 1.8.3 and Kubernetes server version 1.9.
In order to check if it was an issue that could be solved by reinstalling the dashboard, I ran the command
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.8.3/src/deploy/recommended/kubernetes-dashboard.yaml
Then the command
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.8.3/src/deploy/recommended/kubernetes-dashboard.yaml
However, I ended up being unable to recreate the Kubernetes dashboard pod. I'm not sure why the deployment refuses to create it. Here is the output from
kubectl describe deployment kubernetes-dashboard -n kube-system
showing that there is one replica desired but none created.
Name:                   kubernetes-dashboard
Namespace:              kube-system
CreationTimestamp:      <hidden>
Labels:                 addonmanager.kubernetes.io/mode=Reconcile
                        k8s-app=kubernetes-dashboard
                        kubernetes.io/cluster-service=true
Annotations:
Selector:               k8s-app=kubernetes-dashboard
Replicas:               1 desired | 0 updated | 0 total | 0 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           k8s-app=kubernetes-dashboard
  Service Account:  kubernetes-dashboard
  Containers:
   kubernetes-dashboard:
    Image:      k8s-gcrio.azureedge.net/kubernetes-dashboard-amd64:v1.8.3
    Port:       8443/TCP
    Host Port:  0/TCP
    Args:
      --auto-generate-certificates
      --heapster-host=http://heapster.kube-system:80
    Limits:
      cpu:     500m
      memory:  500Mi
    Requests:
      cpu:     300m
      memory:  150Mi
    Liveness:     http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
  Volumes:
   kubernetes-dashboard-certs:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
OldReplicaSets:  <none>
NewReplicaSet:   <none>
Events:          <none>
How do I create the pod and have the dashboard working again?
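Given that the output shows NewReplicaSet: <none> and no events, one way to dig further (a suggestion, not a confirmed fix) is to check whether any ReplicaSet was created and what the controllers report:

# Was a ReplicaSet created for the deployment at all?
kubectl get rs -n kube-system | grep kubernetes-dashboard

# Recent controller events often explain why nothing was created
kubectl get events -n kube-system --sort-by=.metadata.creationTimestamp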
Connection string problem when deploying ASP.NET MVC website to PLESK
I'm going through a basic tutorial for ASP.NET MVC based on this Microsoft Help document. The app displays books and authors from a database and allows CRUD operations on that data. Everything works fine on my machine when I run locally, developing with Visual Studio 2017. But when I try to deploy to my PLESK-hosted website, I'm having problems. I recreated (from scratch) a MySQL database designed to exactly replicate the schema of the local database and populated it with sample values. When the page loads, I get a long hang and ultimately a 500 error after it tries unsuccessfully to connect to the database.
This is the connection string that is working locally:
<add name="BookServiceContext" connectionString="Data Source=(localdb)\MSSQLLocalDB; Initial Catalog=BookServiceContext-20200803155333; Integrated Security=True; MultipleActiveResultSets=True; AttachDbFilename=|DataDirectory|BookServiceContext-20200803155333.mdf" providerName="System.Data.SqlClient" />
And here's the connectionStrings part of the remote web.config:
<add name="BookServiceContext" connectionString="Server=###.##.###.##;Database=BookServiceContext;Uid=myPLESKUserIDAssociatedWithDatabase;Pwd=PasswordForTheUser;multipleactiveresultsets=True" providerName="System.Data.SqlClient" />
Perhaps I have the syntax wrong or am missing a setting in PLESK (I was trying to use a sample from ConnectionStrings.com and haven't set up a DB in PLESK before).
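One detail stands out: the remote database is MySQL, but providerName still says System.Data.SqlClient, which is the SQL Server provider; that mismatch alone can produce a long hang followed by a 500. A sketch of what the entry might look like with the MySQL provider (assumes the MySql.Data NuGet package and its Entity Framework support are installed; credentials are placeholders):

<!-- Sketch: MySQL provider instead of SQL Server; values are placeholders -->
<add name="BookServiceContext"
     connectionString="Server=###.##.###.##;Database=BookServiceContext;Uid=myUser;Pwd=myPassword;"
     providerName="MySql.Data.MySqlClient" />

Note that MultipleActiveResultSets is a SQL Server feature and can be dropped from a MySQL connection string.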
Azure Functions - Deployment Slots - Effects of "Remove additional Files at Destination" Unchecked
This is probably a weird situation. All the posts I have found on this topic go the other way around: they want to check "remove additional files", but in my case I want it unchecked, and that is causing problems at later stages. To give some context:

We are building around 15 to 20 Azure Functions as a wrapper API on top of Dynamics CRM APIs. The two options we evaluated are:

a) Create each function in its own function app. This gives us a maintenance issue (20 URLs for Dev, SIT, UAT, Stage, Prod, and Training is a considerable mess to handle, along with their managed identities, app registrations, etc.). Another key reason not to take this approach is the consumption plan's warm-up issue; it is unlikely that all these functions are heavily used, but some of them are.

b) Keep all functions under one big function app. This is the preferred way for us, as it takes care of most of the above issues. However, the problem we observed is that if we have to deploy one function, we have to wait for all the functions to be tested and approved, and then deploy all of them even if the requirement is to deploy only one. That is a total no-no from an architectural point of view.

So we adopted a hybrid approach: in Visual Studio we still maintain multiple function app projects, but during deployment all these functions are deployed into a single function app using Web Deploy with "Remove additional files in target" unchecked.
The problem now:

This all worked very well for us during our POC. However, now that we have started deploying through pipelines into a staging slot, it has become a problem. Say we first deploy function 1 to staging and swap it to production: staging now has 0 functions and production has 1. Then, when we deploy the 2nd function, staging has only the 2nd function, and if we swap it with production, production gets only the 2nd function and we lose the 1st function from production entirely.

Please let me know if any further details are required.
DevOp Pipeline: Error: More than one package matched with specified pattern: D:\a\r1\a\**\*.zip
##[error]Error: More than one package matched with specified pattern: D:\a\r1\a\**\*.zip. Please restrain the search pattern.
I am new to DevOps and don't quite know how to configure the YAML in the pipeline. By default, a template was created, and I added archive and publish tasks inside it like this:
trigger:
- master

pool:
  vmImage: 'windows-latest'

variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'

steps:
- task: NuGetToolInstaller@1

- task: NuGetCommand@2
  inputs:
    restoreSolution: '$(solution)'

- task: VSBuild@1
  inputs:
    solution: '$(solution)'
    msbuildArgs: '/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:DesktopBuildPackageLocation="$(build.artifactStagingDirectory)\WebApp.zip" /p:DeployIisAppPath="Default Web Site"'
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'

- task: VSTest@2
  inputs:
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'

- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: '$(Build.BinariesDirectory)'
    includeRootFolder: true
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
    replaceExistingArchive: true

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
    publishLocation: 'Container'
I guess there is something in here causing it, but I'm unsure what I need to configure.
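A likely cause, judging from the YAML above (an assumption, not a confirmed diagnosis): the published artifact ends up containing two zips, WebApp.zip from the VSBuild step and $(Build.BuildId).zip from the ArchiveFiles step, so a release task searching with the pattern **\*.zip matches both. One sketch of a fix is to narrow the package pattern in the deployment task; the task shown here is only an example of where such a pattern lives:

# Hypothetical deploy step: match exactly one of the two zips
- task: AzureRmWebAppDeployment@4
  inputs:
    packageForLinux: '$(System.DefaultWorkingDirectory)/**/WebApp.zip'

Alternatively, drop the ArchiveFiles task if the Web Deploy package from VSBuild is the artifact you actually want to deploy.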
Drone CI runner can't find gitea server
I am trying to run a gitea server with drone. They are currently both hosted on the same ubuntu machine and the docker containers are set up through a docker-compose.yml file.
When starting up all services I get the following error in the logs of the drone runner service:
time="2020-08-12T19:10:42Z" level=error msg="cannot ping the remote server" error="Post http://drone:80/rpc/v2/ping: dial tcp: lookup drone on 127.0.0.11:53: no such host"
Both http://gitea and http://drone point to localhost (via /etc/hosts). I sadly don't understand how or why the drone runner cannot find the server. Calling "docker container inspect" on all 4 of my containers shows they are all connected to the same network (drone_and_gitea_giteanet), which is also the network I set in the DRONE_RUNNER_NETWORKS environment variable.
This is how my docker-compose.yml file looks:
version: "3.8" # Create named volumes for gitea server, gitea database and drone server volumes: gitea: gitea-db: drone: # Create shared network for gitea and drone networks: giteanet: external: false services: gitea: container_name: gitea image: gitea/gitea:1 #restart: always environment: - APP_NAME="Automated Student Assessment Tool" - USER_UID=1000 - USER_GID=1000 - ROOT_URL=http://gitea:3000 - DB_TYPE=postgres - DB_HOST=gitea-db:5432 - DB_NAME=gitea - DB_USER=gitea - DB_PASSWD=gitea networks: - giteanet ports: - "3000:3000" - "222:22" volumes: - gitea:/data - /etc/timezone:/etc/timezone:ro - /etc/localtime:/etc/localtime:ro depends_on: - gitea-db gitea-db: container_name: gitea-db image: postgres:9.6 #restart: always environment: - POSTGRES_USER=gitea - POSTGRES_PASSWORD=gitea - POSTGRES_DB=gitea networks: - giteanet volumes: - gitea-db:/var/lib/postgresql/data drone-server: container_name: drone-server image: drone/drone:1 #restart: always environment: # General server settings - DRONE_SERVER_HOST=drone:80 - DRONE_SERVER_PROTO=http - DRONE_RPC_SECRET=topsecret # Gitea Config - DRONE_GITEA_SERVER=http://gitea:3000 - DRONE_GITEA_CLIENT_ID=<CLIENT ID> - DRONE_GITEA_CLIENT_SECRET=<CLIENT SECRET> # Create Admin User, name should be the same as Gitea Admin user - DRONE_USER_CREATE=username:AdminUser,admin:true # Drone Logs Settings - DRONE_LOGS_PRETTY=true - DRONE_LOGS_COLOR=true networks: - giteanet ports: - "80:80" volumes: - drone:/data depends_on: - gitea drone-agent: container_name: drone-agent image: drone/drone-runner-docker:1 #restart: always environment: - DRONE_RPC_PROTO=http - DRONE_RPC_HOST=drone:80 - DRONE_RPC_SECRET=topsecret - DRONE_RUNNER_CAPACITY=1 - DRONE_RUNNER_NETWORKS=drone_and_gitea_giteanet networks: - giteanet volumes: - /var/run/docker.sock:/var/run/docker.sock depends_on: - drone-server
It would help me a lot if somebody could maybe take a look at the issue and help me out! :)
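In case it helps: containers on a Docker network resolve each other by service/container name, and the host's /etc/hosts does not apply inside them. The runner looks up the literal hostname drone (from DRONE_RPC_HOST=drone:80), but the service is named drone-server, so the lookup fails. One sketch of a fix is to give the server an extra network alias (alternatively, point DRONE_RPC_HOST and DRONE_SERVER_HOST at drone-server:80):

  # Sketch: make "drone" resolvable inside giteanet via a network alias
  drone-server:
    networks:
      giteanet:
        aliases:
          - drone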
set hardware Target through Jenkins pipeline
I am trying to select a user-defined target hardware for code generation using a Jenkins pipeline script. The code generator used is dSPACE TargetLink. I can select the user-defined target hardware when I use the code generator manually, but when I use a Jenkins job (scheduled automated code generation), I get an error during export of the generated code saying that the target hardware is not recognised. Is there a solution or a workaround for this?