AWS Elastic Beanstalk Amazon Linux 2 log file permissions
I migrated from Amazon Linux 1 (AL1) to Amazon Linux 2 (AL2) on AWS Elastic Beanstalk. AL2 changed the location of my nodejs.log to /var/log/{{.}}.stdout.log.
I resolved this by adding rsyslog.config to .ebextensions:
files:
  "/opt/elasticbeanstalk/config/private/rsyslog.conf.template":
    mode: "000644"
    owner: root
    group: root
    content: |
      # This rsyslog file redirects Elastic Beanstalk platform logs.
      # Logs are initially sent to syslog, but we also want to divide
      # stdout and stderr into separate log files.
      template(name="SimpleFormat" type="string" string="%msg%\n")
      $EscapeControlCharactersOnReceive off
      {{range .ProcessNames}}if $programname == '{{.}}' then {
        *.=warning;*.=err;*.=crit;*.=alert;*.=emerg /var/log/nodejs/nodejs.log; SimpleFormat
        *.=info;*.=notice /var/log/nodejs/nodejs.log; SimpleFormat
      }
      {{end}}
The above configuration works, but I have a problem with log file permissions. The /var/log/nodejs directory and the nodejs.log file are readable only by root (mode 600), so the CloudWatch agent can't read them. Changing the permissions manually does the job, but how can I make the files be created with the right permissions automatically on each Beanstalk deploy?
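One approach that should work here (a sketch built on standard rsyslog directives, not an official Beanstalk recipe): rsyslog itself decides the mode of the files and directories it creates, and it honors the legacy $DirCreateMode and $FileCreateMode directives. Adding them to the template's content block, before the {{range}} loop, should make newly created logs readable by the CloudWatch agent:

      # Let the CloudWatch agent read the files rsyslog creates.
      $DirCreateMode 0755
      $FileCreateMode 0644

Files left over from earlier deploys keep their old mode, so a one-off container_commands entry (the key name 01_fix_log_permissions is my own) can fix those up during deployment:

container_commands:
  01_fix_log_permissions:
    command: |
      mkdir -p /var/log/nodejs
      chmod 755 /var/log/nodejs
      touch /var/log/nodejs/nodejs.log
      chmod 644 /var/log/nodejs/nodejs.log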
See also questions close to this topic
-
Upload file from HTML when block public access is true
I am using django-s3direct for file upload (https://github.com/bradleyg/django-s3direct).
I am using the IAM role setting because I upload the file from a server in an ECS container.
Now I set the blockPublicAccess of S3 to true. When uploading images from HTML, this error appears:

https://s3.ap-northeast-1.amazonaws.com/static-resource-v/images/c64d6e593de44aa5b10dcf1766582547/_origin.jpg?uploads 403 (Forbidden)
initiate error: static-resource-v/line-assets/images/c64d6e593de44aa5b10dcf1766582547/_origin.jpg AWS Code: AccessDenied, Message: Access Denied, status: 403
OK, it is understandable: the browser is trying to access the bucket directly to initiate the upload.
However, is there any way to upload a file from the browser when blockPublicAccess is true?
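Not part of the original question, but the usual pattern for this constraint is to keep Block Public Access on and have the Django server, which already holds the ECS task's IAM role, presign the upload for the browser: Block Public Access only blocks anonymous public access, while presigned requests are authenticated as the signing role. A minimal boto3 sketch, with bucket and key as placeholder values:

import boto3

s3 = boto3.client("s3")  # credentials come from the ECS task role

# Presign a browser POST upload, valid for 5 minutes.
presigned = s3.generate_presigned_post(
    Bucket="static-resource-v",
    Key="images/example/_origin.jpg",
    ExpiresIn=300,
)

# Hand presigned["url"] and presigned["fields"] to the browser; it submits
# them as a multipart/form-data POST with the file field appended last.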
-
Linux on Lightsail instance is asking for a password and it's not working
I'm trying to restart MariaDB on Ubuntu but it's not letting me. I enter:

systemctl restart mariadb

and get:
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ====
Authentication is required to restart 'mariadb.service'.
Authenticating as: Ubuntu (ubuntu)
Password:
polkit-agent-helper-1: pam_authenticate failed: Authentication failure
==== AUTHENTICATION FAILED ====
I have the same password for all functions so I do not understand why it is not working. What can I do?
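A guess from the polkit output rather than from the thread: the default ubuntu user on a Lightsail instance is created with SSH-key login and no password, so polkit's password prompt can never succeed. Running the command through sudo, which is passwordless for that user, avoids the prompt entirely:

sudo systemctl restart mariadb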
-
AWS Pinpoint sendMessages() Addresses param field error
I'm having trouble replicating the format of the params object's Addresses field in a way where I can easily add entries to the object.
If I use the following as the params, with destinationNumber[0] and destinationNumber[1] in the format of 1 plus a 10-digit number (e.g. 13334535667), then it sends the message to both numbers no problem:

const params = {
  ApplicationId: applicationId,
  MessageRequest: {
    Addresses: {
      [destinationNumber[0]]: { ChannelType: 'SMS' },
      [destinationNumber[1]]: { ChannelType: 'SMS' }
    },
    MessageConfiguration: {
      SMSMessage: {
        Body: message,
        Keyword: registeredKeyword,
        MessageType: messageType,
        OriginationNumber: originationNumber
      }
    }
  }
};
I'm trying to replicate this format for Addresses, but I'm getting:

Unexpected key '13334535667' found in params.MessageRequest.Addresses['0']

The format my console output shows for Addresses is:

[ { '12345678910': { ChannelType: 'SMS' } }, { '12345678911': { ChannelType: 'SMS' } } ]
I'm using a map to call this function:

function createPhoneMessagingObject(phoneNumber: string) {
  return {
    [phoneNumber]: { ChannelType: 'SMS' }
  };
}
I tried wrapping the key in an array like in the phone object, but per the output the brackets go away, so maybe there's an easier/more correct way of doing this. I appreciate any help!
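Judging by the error and the console output, the root cause is that Array.prototype.map returns an array of single-key objects, while Pinpoint expects Addresses to be one object keyed by phone number (hence the complaint about the array index '0'). A sketch that folds the numbers into a single object with reduce; destinationNumbers is my assumed name for the array of numbers:

function createAddresses(destinationNumbers: string[]) {
  // Merge into one map: { '13334535667': { ChannelType: 'SMS' }, ... }
  return destinationNumbers.reduce<Record<string, { ChannelType: string }>>(
    (addresses, phoneNumber) => {
      addresses[phoneNumber] = { ChannelType: 'SMS' };
      return addresses;
    },
    {}
  );
}

// params.MessageRequest.Addresses = createAddresses(destinationNumber);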
-
Elastic Beanstalk - Nginx "connect() to unix:/run/php-fpm/www.sock failed"
My x_optimize_php.sh file is below:

#!/bin/bash
# This script sets the max processes and spare processes
# according to the resources available on this machine instance.
DEFAULT_PROCESS_MEMORY="120"
MAX_REQUESTS="500"

PROCESS_MAX_MB=$(ps --no-headers -o "rss,cmd" -C php-fpm | awk '{ sum+=$1 } END { printf ("%d\n", sum/NR/1024) }') || $DEFAULT_PROCESS_MEMORY

VCPU_CORES=$(($(lscpu | awk '/^CPU\(s\)/{ print $2 }')))
TOTAL_MEMORY_IN_KB=$(free | awk '/^Mem:/{print $2}')
USED_MEMORY_IN_KB=$(free | awk '/^Mem:/{print $3}')
FREE_MEMORY_IN_KB=$(free | awk '/^Mem:/{print $4}')
TOTAL_MEMORY_IN_MB=$(($TOTAL_MEMORY_IN_KB / 1024))
USED_MEMORY_IN_MB=$(($USED_MEMORY_IN_KB / 1024))
FREE_MEMORY_IN_MB=$(($FREE_MEMORY_IN_KB / 1024))
MAX_CHILDREN=$(($FREE_MEMORY_IN_MB / $PROCESS_MAX_MB))

# Optimal would be to have at least 1/4th of the children waiting to serve requests.
START_SERVERS=$(($MAX_CHILDREN / 4))
MIN_SPARE_SERVERS=$(($MAX_CHILDREN / 4))

# Optimal would be to have at most 3/4ths of the children waiting to serve requests.
MAX_SPARE_SERVERS=$(((3 * $MAX_CHILDREN) / 4))

sudo sed -i "s|pm.max_children.*|pm.max_children = $MAX_CHILDREN|g" /etc/php-fpm.d/www.conf
sudo sed -i "s|pm.start_servers.*|pm.start_servers = $START_SERVERS|g" /etc/php-fpm.d/www.conf
sudo sed -i "s|pm.min_spare_servers.*|pm.min_spare_servers = $MIN_SPARE_SERVERS|g" /etc/php-fpm.d/www.conf
sudo sed -i "s|pm.max_spare_servers.*|pm.max_spare_servers = $MAX_SPARE_SERVERS|g" /etc/php-fpm.d/www.conf
printf "\npm.max_requests = $MAX_REQUESTS" | sudo tee -a /etc/php-fpm.d/www.conf

# Restart the services afterwards.
sudo systemctl restart php-fpm.service
sudo systemctl restart nginx.service
And suddenly it returns the error below. My server had been working with no errors for more than a year, and today it began to return errors like this:
2022/05/06 13:21:54 [error] 4460#4460: *3393 connect() to unix:/run/php-fpm/www.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: #, server: , request: "GET /api/product/blabla HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm/www.sock:", host: "blabla"
What's wrong with it? I've tried everything, restarted the server, etc. Even when I rebuild the server, after 2 minutes it returns the same error.
Monitoring my Elastic Beanstalk environment shows no load; it uses 5% of the CPU. What could be wrong? What should I do?
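A hunch worth checking, mine rather than anything from the thread: the script derives pm.max_children from free memory at the moment it runs, so if it runs while memory is mostly consumed (or before any php-fpm process exists, which skews the awk average), pm.max_children can come out tiny; with every worker instantly busy, nginx gets EAGAIN, which is exactly "11: Resource temporarily unavailable". A small guard for the same script, plus a command to inspect what was actually written:

# Guard against a pathologically small pool when free memory is low.
MIN_CHILDREN=5
if [ "$MAX_CHILDREN" -lt "$MIN_CHILDREN" ]; then
  MAX_CHILDREN=$MIN_CHILDREN
fi

# Inspect the values the script wrote:
grep -E 'pm\.(max_children|start_servers|min_spare_servers|max_spare_servers)' /etc/php-fpm.d/www.conf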
-
Add the same message and notification for multiple user ids in the database without using multiple arrays of data, using only the array of user ids
I know that to insert data in MySQL I can use three different methods for multiple entries, the efficient one being:
INSERT INTO TABLE (COLUMN1, COLUMN2, COLUMN3)
VALUES ('ID1', 'SAME TITLE', 'SAME MESSAGE'),
       ('ID2', 'SAME TITLE', 'SAME MESSAGE'),
       ('ID3', 'SAME TITLE', 'SAME MESSAGE'),
       .......
       ('ID1000000', 'SAME TITLE', 'SAME MESSAGE');
But this query will also take a long time while my other queries are executing on the server, where some queries take 3-4 seconds to return data at an average online user base of 10,000. So, is there a way to write the query so that I don't need to pass the SAME TITLE and SAME MESSAGE for every row, and it only needs the array of user ids, which I can send in chunks of, say, 10k or 20k at a time? That might reduce the overall data size sent to RDS.
Please suggest. This may sound hypothetical, as I have not seen such a thing done before, but I'm looking for any optimization possibility, to any extent.
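One pattern that avoids repeating the constants per row (a sketch; the notifications table, its columns, and the id type are assumed names): send only the ids, stage them, and write the title and message once in an INSERT ... SELECT:

-- Stage just the ids; only this data crosses the wire, in 10k-20k chunks.
CREATE TEMPORARY TABLE tmp_target_users (user_id VARCHAR(32) PRIMARY KEY);

INSERT INTO tmp_target_users (user_id)
VALUES ('ID1'), ('ID2'), ('ID3');

-- The shared title and message appear exactly once.
INSERT INTO notifications (user_id, title, message)
SELECT user_id, 'SAME TITLE', 'SAME MESSAGE'
FROM tmp_target_users;

DROP TEMPORARY TABLE tmp_target_users;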
-
Logs from the Serverless Framework are not showing in AWS CloudWatch
I am using the Serverless Framework with AWS CodeCommit, CodeBuild, and CodePipeline. When I push my code and CodeBuild starts to deploy it, I don't get any feedback or logs from the Serverless Framework inside the CloudWatch log groups.
I am using the default service roles for CodeBuild and CodePipeline, which were created by AWS when I first created the pipeline and the CodeBuild project. Both of those roles include policies for CloudWatch and for creating log groups, as follows:
CodeBuild
"Statement": [ { "Effect": "Allow", "Resource": [ "arn:aws:logs:us-west-2:*****:log-group:/aws/codebuild/sis-notes-backend-codebuild", "arn:aws:logs:us-west-2:*****:log-group:/aws/codebuild/sis-notes-backend-codebuild:*" ], "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents" ] },
CodePipeline
"Action": [ "elasticbeanstalk:*", "ec2:*", "elasticloadbalancing:*", "autoscaling:*", "cloudwatch:*", "s3:*", "sns:*", "cloudformation:*", "rds:*", "sqs:*", "ecs:*" ], "Resource": "*", "Effect": "Allow" },
This is the output of the CloudWatch log groups. As you can see, I wrote rubbish into the deploy command in order to get an error or a failed response back from Serverless, but I got nothing, just empty lines.
buildspec.yml
version: 0.2
phases:
  install:
    commands:
      - echo Installing Serverless
      - npm install -g serverless
  pre_build:
    commands:
      - echo Install source NPM dependencies
      - npm install
  build:
    commands:
      - echo Deployment started on `date`
      - echo Deploying with the Serverless Framework
      - sls deploy -v -s $ENV_NAMEss kklksadk
  post_build:
    commands:
      - echo Deployment completed on `date`
serverless.yml
service: sls-notes-backend
frameworkVersion: '3'
provider:
  name: aws
  runtime: nodejs14.x
  region: us-west-2
  stage: prod
  memorySize: 128
  timeout: 4
  endpointType: regional
  environment:
    NOTES_TABLE: ${self:service}-${opt:stage, self:provider.stage}
resources:
  Resources:
    NotesTable:
      Type: AWS::DynamoDB::Table
      DeletionPolicy: Retain
      Properties:
        TableName: ${self:provider.environment.NOTES_TABLE}
        AttributeDefinitions:
          - AttributeName: user_id
            AttributeType: S
          - AttributeName: timestamp
            AttributeType: N
          - AttributeName: note_id
            AttributeType: S
        KeySchema:
          - AttributeName: user_id
            KeyType: HASH
          - AttributeName: timestamp
            KeyType: RANGE
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        GlobalSecondaryIndexes:
          - IndexName: note_id_index
            KeySchema:
              - AttributeName: note_id
                KeyType: HASH
            Projection:
              ProjectionType: ALL
            ProvisionedThroughput:
              ReadCapacityUnits: 2
              WriteCapacityUnits: 2
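One thing worth verifying (general CodeBuild behavior, not something the post establishes): each CodeBuild project carries its own logs configuration, and build output, including everything sls deploy prints, only reaches CloudWatch if the project's CloudWatch Logs setting is enabled. The CLI can show it:

aws codebuild batch-get-projects \
  --names sis-notes-backend-codebuild \
  --query 'projects[0].logsConfig'

# Expect "cloudWatchLogs": { "status": "ENABLED" }; if it is DISABLED,
# no build output appears in the log group no matter what IAM allows.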
-
How to create a CloudWatch alarm from log group metric filter
I'm trying to create a CloudWatch alarm from a log group metric filter. I have created the metric filter, but I'm unable to set up an alarm, as no data seems to be graphed.
I am trying to set up a metric filter to track 502 errors from our ECS container logs.
I go to CloudWatch > Log groups and select our group 'example-ecs'.
This group contains the log streams from our ECS containers. There are many, since a new stream is created each time the website is deployed; I think this is expected. There are hundreds of streams:
web/example-task-production/1XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXX   2022-04-14 13:54:14 (UTC+02:00)
web/example-task-production/2XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXX   2022-05-05 12:09:00 (UTC+02:00)
web/example-task-production/3XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXX   2022-04-04 18:11:03 (UTC+02:00)
web/example-task-production/4XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXX   2022-04-05 09:47:15 (UTC+02:00)
If I 'search all' with the following filter:
[timestamp, timezone, server, from, ip, request, method, url, response, http, codetitle, code=502, bytes, sent, time]
I get these search results (as expected):
05/Apr/2022:16:04:28 +0000 Server: From: XXX.XX.X.XXX Request: POST https://example.com/broken/page Response: HTTP Code: 502 Bytes Sent: 315 Time: 0.042
05/Apr/2022:16:42:02 +0000 Server: From: XXX.XX.X.XXX Request: POST https://example.com/broken/page Response: HTTP Code: 502 Bytes Sent: 315 Time: 0.062
05/Apr/2022:19:14:50 +0000 Server: From: XXX.XX.X.XXX Request: POST https://example.com/broken/page Response: HTTP Code: 502 Bytes Sent: 315 Time: 0.043
I then created a metric filter using this filter pattern, with the following settings:
Filter pattern:
[timestamp, timezone, server, from, ip, request, method, url, response, http, codeTitle, code=502, bytes, sent, time]
The 'Test pattern' also matches the test above.
Filter name: HTTP502Errors
Metric namespace: ExampleMetric
Metric name: ServerErrorCount
Metric value: 1
Default value – optional: 0
Unit – optional: Count
I should have 5 entries in the logs within the last 24 hours, but when I try to graph this new metric or create an alarm, there seems to be no data in it. How do I make this work?
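Two general CloudWatch behaviors likely explain the empty graph (these come from the CloudWatch docs, not from this thread): a metric filter only produces data points for log events ingested after the filter is created, so the five historical 502s will never appear, and a sparse custom metric graphs best with the Sum statistic over a fixed period. Once a new matching event has arrived, an alarm can be attached to the metric, for example (the alarm name is my own):

aws cloudwatch put-metric-alarm \
  --alarm-name http-502-errors \
  --namespace ExampleMetric \
  --metric-name ServerErrorCount \
  --statistic Sum \
  --period 300 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --evaluation-periods 1 \
  --treat-missing-data notBreaching

Here --treat-missing-data notBreaching keeps the alarm in the OK state during the long stretches with no 502s.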
-
Enable log rotation in rsyslog
How do I enable log rotation in an rsyslog configuration? The method described in the official rsyslog documentation, using output channels, is not working for me.
The script given in the official rsyslog documentation for output channels is available here: https://www.rsyslog.com/doc/master/tutorials/log_rotation_fix_size.html
module(load="imudp" TimeRequery="500") module(load="omstdout") module(load="omelasticsearch") module(load="mmjsonparse") module(load="mmutf8fix") ruleset(name="prismaudit_rs") { action(type="omfile" dirCreateMode="0777" fileCreateMode="0777" file="/logs/prismaudit.log") } $outchannel log_rotation,/logs/prismaudit.log, 3000,/etc/log_rotation_script *.* :omfile:$log_rotation #input(type="imptcp" port="514") input(type="imudp" port="514" ruleset="prismaudit_rs")
This is the snippet of code I am using. I have also tried adding the output channel part of the code inside the ruleset (after the action statement).
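One detail in the snippet that may explain the failure (my reading of the config): the imudp input is bound to ruleset="prismaudit_rs", so incoming messages never traverse the main ruleset where the *.* :omfile:$log_rotation line sits, and the output channel is never invoked. An alternative that sidesteps output channels entirely is plain logrotate with size-based rotation; the file path comes from the config above, the thresholds are mine. Dropped into /etc/logrotate.d/prismaudit:

/logs/prismaudit.log {
    size 3k
    rotate 5
    compress
    missingok
    notifempty
    postrotate
        /usr/bin/systemctl kill -s HUP rsyslog.service
    endscript
}

The HUP in postrotate makes rsyslog close and reopen its output files after each rotation.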
-
Rsyslog collect logs from different timezones
I'm using rsyslog on a server to collect logs from remote hosts.
Collector server config:
# timedatectl
      Local time: Wed 2022-04-27 16:02:43 MSK
  Universal time: Wed 2022-04-27 13:02:43 UTC
        RTC time: n/a
       Time zone: Europe/Moscow (MSK, +0300)
System clock synchronized: yes
     NTP service: inactive
 RTC in local TZ: no

# cat /etc/rsyslog.d/20_external.conf
$CreateDirs on
$PreserveFQDN on

# provides UDP syslog reception
module(load="imudp")
input(type="imudp" port="514")

# provides TCP syslog reception
module(load="imtcp")
input(type="imtcp" port="514")

template(
  name="external"
  type="string"
  string="/var/log/external/%HOSTNAME%/%syslogfacility-text%.%programname%.%syslogseverity-text%.log"
)

action(
  type="omfile"
  dirCreateMode="0775"
  FileCreateMode="0644"
  dynaFile="external"
)
On the remote host:

# timedatectl
      Local time: Wed 2022-04-27 13:04:03 UTC
  Universal time: Wed 2022-04-27 13:04:03 UTC
        RTC time: n/a
       Time zone: UTC (UTC, +0000)
System clock synchronized: yes
     NTP service: inactive
 RTC in local TZ: no

# cat /etc/rsyslog.d/10-external.conf
*.* @rserver

# logger "hello, local time $(date)"
And on the rsyslog server I get:

# cat /var/log/external/ruser.home.xmu/user.root.notice.log
2022-04-27T13:07:06+03:00 ruser.home.xmu root: hello, local time 2022-04-27T13:07:06 UTC

# date
2022-04-27T16:08:56 MSK
What can I do to change the time zone handling for certain remote hosts on the collector server?
When I research incidents across all the servers, the times in the logs do not match. I want the time in the collector's logs to be in its own time zone:

2022-04-27T16:07:06+03:00 ruser.home.xmu root: hello, local time 2022-04-27T13:07:06 UTC
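A hedged suggestion built from standard rsyslog properties (untested against this exact setup): the default file format writes timereported, the timestamp the sender placed in the message. A template based on timegenerated, the time the collector received the message, stamps every line with the collector's own clock and time zone:

template(
  name="collector_time"
  type="string"
  string="%timegenerated:::date-rfc3339% %HOSTNAME% %syslogtag%%msg:::sp-if-no-1st-sp%%msg:::drop-last-lf%\n"
)

action(
  type="omfile"
  dirCreateMode="0775"
  FileCreateMode="0644"
  dynaFile="external"
  template="collector_time"
)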
-
log4j2.properties syslog is not working
My Java application's problem is that log4j2 syslog output is written not to 'local1.log' but to 'messages'. My /etc/rsyslog.conf is configured with 'local1.* /var/log/local1.log'.
One weird thing is that when I remove 'appender.syslog.layout.type' and 'appender.syslog.layout.pattern' from log4j2.properties, syslog output starts being written to /var/log/local1.log correctly.
Is my configuration incorrect?
Are layout properties not applied to the Syslog appender?
[/etc/rsyslog.conf]
# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none;local1.none   /var/log/messages

# The authpriv file has restricted access.
authpriv.*   /var/log/secure

...

local1.*   /var/log/local1.log
[Used log4j2 library]
log4j-api-2.17.2.jar
log4j-core-2.17.2.jar
[log4j2.properties]
status = warn
name = Test

# Console appender configuration
appender.console.type = Console
appender.console.name = consoleLogger
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{HH:mm:ss} %5p (%c{1} - %M:%L) - %m%n

appender.syslog.type = Syslog
appender.syslog.name = sysLogger
appender.syslog.host = localhost
appender.syslog.port = 514
appender.syslog.protocol = UDP
appender.syslog.facility = LOCAL1
appender.syslog.layout.type = PatternLayout
appender.syslog.layout.pattern = %c{1} (%M:%L) %m\n

# Root logger level
rootLogger.level = debug
rootLogger.appenderRefs = consoleLogger, sysLogger
rootLogger.appenderRef.stdout.ref = consoleLogger
rootLogger.appenderRef.syslog.ref = sysLogger
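A plausible explanation, inferred from how syslog framing works rather than confirmed for this case: the Syslog appender normally emits a <PRI> header that encodes the facility, and replacing its layout with a plain PatternLayout strips that header, so rsyslog cannot see LOCAL1 and files the message under the default facility into /var/log/messages. That would also explain why removing the layout lines fixes the routing. If a custom message body is still needed, one option is Rfc5424Layout, which keeps the facility-aware header (attribute names per the log4j2 layout documentation; verify against your version):

appender.syslog.type = Syslog
appender.syslog.name = sysLogger
appender.syslog.host = localhost
appender.syslog.port = 514
appender.syslog.protocol = UDP
# Rfc5424Layout keeps the <PRI> header, so rsyslog can still route on LOCAL1.
appender.syslog.layout.type = Rfc5424Layout
appender.syslog.layout.facility = LOCAL1
appender.syslog.layout.appName = Test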