CloudWatch Grafana
The linked image shows that Grafana can display the Request metric just like CloudWatch (on the left-hand side of the image), but the 4xx metric is not displayed at all. https://i.stack.imgur.com/tTLKx.png
In the second screenshot you can see that both metrics belong to the same application. https://i.stack.imgur.com/SbSHi.png
Here are some additional details on the metric query https://i.stack.imgur.com/BiKkC.png
https://i.stack.imgur.com/GXSoQ.png
Here's the error I get when using an imported dashboard template (https://grafana.com/grafana/dashboards/590). I tried searching for the error code but I'm at my wits' end.
https://i.stack.imgur.com/Aki2K.png
message:"metric request error: "InvalidParameter: 1 validation error(s) found.\n- minimum field size of 1, GetMetricDataInput.MetricDataQueries[0].MetricStat.Metric.Dimensions[0].Value.\n""
See also questions close to this topic
-
change to promtail yaml config not reflected in grafana/loki
I've made a revision to a /usr/local/bin/config-promtail.yaml file on server a, something like this:
- job_name: error_file
  static_configs:
    - targets:
        - localhost
      labels:
        job: bridge_errors
        __path__: home/user/logs/new_error_file.json
        host: server_a
where 'new_error_file.json' used to be 'old_error_file.json'. This sends the logs to Loki/Grafana running on server b. I could see the old error file(s) in the Loki data source in Grafana just fine, but then I updated the file name in the YAML config as above, restarted the Promtail service and... nothing. The new file is not showing up. I can check the status of the service and see level=info msg="Seeked..." entries for the new file, so it's working at least as far as the Promtail service is concerned. Any ideas? I'm a bit new to Grafana (only a couple of days in), so I am probably missing something. Thanks for any assistance!
-
Helm Loki Stack additional promtail config
I installed Loki and Prometheus using Helm. However, I would like to replace the logs in one place. I used helm show values grafana/loki-stack > loki-stack-values.yml to output the values and came to the following result:
loki:
  enabled: true
  isDefault: true
promtail:
  enabled: true
  config:
    lokiAddress: http://{{ .Release.Name }}:3100/loki/api/v1/push
  prometheusSpec:
    additionalScrapeConfigs:
      - match:
          selector: '{name="promtail"}'
          stages:
            # The regex stage parses out a level, timestamp, and component. At the end
            # of the stage, the values for level, timestamp, and component are only
            # set internally for the pipeline. Future stages can use these values and
            # decide what to do with them.
            - regex:
                expression: '.*level=(?P<level>[a-zA-Z]+).*ts=(?P<timestamp>[T\d-:.Z]*).*component=(?P<component>[a-zA-Z]+)'
Actually, everything works fine, but my log output looks really weird, so I am trying to add the additionalScrapeConfigs:
2022-05-06 18:31:55 {"log":"2022-05-06T18:31:55,003 \u001b[36mDEBUG\u001b[m
So to the question:
How can I use helm install dlp-dev-loki grafana/loki-stack --values loki-stack-values.yml -n dev together with additional scrape configs for Promtail?
-
Can we perform outer join on a table from MySQL with data from ElasticSearch based on a field name in the same panel of grafana dashboard
Suppose we have a MySQL table 'student' with column names: 'class_id', 'roll_no', 'student_name'.
And we have ElasticSearch data for 'class' with field names: 'class_id', 'class_name', 'school_name'.
I want to perform an outer join on these two datasets based on 'class_id', so that 'class_id', 'roll_no', 'student_name', 'class_name', 'school_name' are displayed in tabular format in the same panel of the Grafana dashboard.
How can this be achieved, given that there are two datasources configured in the Grafana dashboard: 1) MySQL and 2) Elasticsearch?
-
Logs from Serverless Framework are not showing in AWS CloudWatch
I am using serverless framework, AWS CodeCommit, CodeBuild, and CodePipeline. When I push my code and CodeBuild starts to deploy it, I don't get any feedback or logs from serverless framework inside the CloudWatch log groups.
I am using the default service roles for CodeBuild and CodePipeline, which were created by AWS when I first created a new pipeline and CodeBuild project. Both of those roles include policies for CloudWatch and creating log groups, as follows:
CodeBuild
"Statement": [ { "Effect": "Allow", "Resource": [ "arn:aws:logs:us-west-2:*****:log-group:/aws/codebuild/sis-notes-backend-codebuild", "arn:aws:logs:us-west-2:*****:log-group:/aws/codebuild/sis-notes-backend-codebuild:*" ], "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents" ] },
CodePipeline
"Action": [ "elasticbeanstalk:*", "ec2:*", "elasticloadbalancing:*", "autoscaling:*", "cloudwatch:*", "s3:*", "sns:*", "cloudformation:*", "rds:*", "sqs:*", "ecs:*" ], "Resource": "*", "Effect": "Allow" },
And this is the output of the CloudWatch log groups. As you can see, I've written rubbish in the deploy command on purpose to get an error or failed response back from Serverless, but I got nothing, just empty lines.
buildspec.yml
version: 0.2
phases:
  install:
    commands:
      - echo Installing Serverless
      - npm install -g serverless
  pre_build:
    commands:
      - echo Install source NPM dependencies
      - npm install
  build:
    commands:
      - echo Deployment started on `date`
      - echo Deploying with the Serverless Framework
      - sls deploy -v -s $ENV_NAMEss kklksadk
  post_build:
    commands:
      - echo Deployment completed on `date`
serverless.yml
service: sls-notes-backend
frameworkVersion: '3'
provider:
  name: aws
  runtime: nodejs14.x
  region: us-west-2
  stage: prod
  memorySize: 128
  timeout: 4
  endpointType: regional
  environment:
    NOTES_TABLE: ${self:service}-${opt:stage, self:provider.stage}
resources:
  Resources:
    NotesTable:
      Type: AWS::DynamoDB::Table
      DeletionPolicy: Retain
      Properties:
        TableName: ${self:provider.environment.NOTES_TABLE}
        AttributeDefinitions:
          - AttributeName: user_id
            AttributeType: S
          - AttributeName: timestamp
            AttributeType: N
          - AttributeName: note_id
            AttributeType: S
        KeySchema:
          - AttributeName: user_id
            KeyType: HASH
          - AttributeName: timestamp
            KeyType: RANGE
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        GlobalSecondaryIndexes:
          - IndexName: note_id_index
            KeySchema:
              - AttributeName: note_id
                KeyType: HASH
            Projection:
              ProjectionType: ALL
            ProvisionedThroughput:
              ReadCapacityUnits: 2
              WriteCapacityUnits: 2
-
AWS beanstalk Amazon Linux 2 log file permissions
I migrated from AL1 to AL2 on AWS Beanstalk. AL2 changed the location of my nodejs.log to
/var/log/{{.}}.stdout.log
I resolved this by adding rsyslog.config to .ebextensions:
files:
  "/opt/elasticbeanstalk/config/private/rsyslog.conf.template":
    mode: "000644"
    owner: root
    group: root
    content: |
      # This rsyslog file redirects Elastic Beanstalk platform logs.
      # Logs are initially sent to syslog, but we also want to divide
      # stdout and stderr into separate log files.
      template(name="SimpleFormat" type="string" string="%msg%\n")
      $EscapeControlCharactersOnReceive off
      {{range .ProcessNames}}if $programname == '{{.}}' then {
        *.=warning;*.=err;*.=crit;*.=alert;*.=emerg /var/log/nodejs/nodejs.log; SimpleFormat
        *.=info;*.=notice /var/log/nodejs/nodejs.log; SimpleFormat
      }
      {{end}}
The above configuration is working, but I have a problem with log file permissions. The /var/log/nodejs directory and the nodejs.log file are only readable by root (chmod 600), so the CloudWatch agent can't read them. Changing the permissions manually does the job, but how can I make the permissions be set automatically on Beanstalk deploy?
-
How to create a CloudWatch alarm from log group metric filter
I'm trying to create a CloudWatch alarm from a log group metric filter. I have created the metric filter, but I'm unable to set up an alarm, as no data seems to be graphed.
I am trying to setup a metric filter to track 502 errors from our ECS container logs.
I go to CloudWatch > Log groups and select our group 'example-ecs'.
This group contains the log streams from our ECS containers. There are many, as a new stream is created each time the website is deployed. I think this is expected; there are hundreds of logs.
web/example-task-production/1XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXX 2022-04-14 13:54:14 (UTC+02:00)
web/example-task-production/2XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXX 2022-05-05 12:09:00 (UTC+02:00)
web/example-task-production/3XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXX 2022-04-04 18:11:03 (UTC+02:00)
web/example-task-production/4XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXX 2022-04-05 09:47:15 (UTC+02:00)
If I 'search all' with the following filter:
[timestamp, timezone, server, from, ip, request, method, url, response, http, codetitle, code=502, bytes, sent, time]
I get these search results (as expected):
05/Apr/2022:16:04:28 +0000 Server: From: XXX.XX.X.XXX Request: POST https://example.com/broken/page Response: HTTP Code: 502 Bytes Sent: 315 Time: 0.042
05/Apr/2022:16:42:02 +0000 Server: From: XXX.XX.X.XXX Request: POST https://example.com/broken/page Response: HTTP Code: 502 Bytes Sent: 315 Time: 0.062
05/Apr/2022:19:14:50 +0000 Server: From: XXX.XX.X.XXX Request: POST https://example.com/broken/page Response: HTTP Code: 502 Bytes Sent: 315 Time: 0.043
I then created a metric filter using this filter pattern. With the following settings:
Filter pattern:
[timestamp, timezone, server, from, ip, request, method, url, response, http, codeTitle, code=502, bytes, sent, time]
The 'Test pattern' also matches the test above.
Filter name: HTTP502Errors
Metric namespace: ExampleMetric
Metric name: ServerErrorCount
Metric value: 1
Default value – optional: 0
Unit – optional: Count
I should have 5 entries in the logs within the last 24 hours. When I try to graph this new metric or create an alarm, there seems to be no data in it. How do I make this work?
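In case it helps to cross-check the setup outside the console, here is a minimal boto3 sketch of an alarm on that custom metric; the alarm name, threshold and SNS topic ARN are placeholders, and it assumes the filter above is publishing into the ExampleMetric / ServerErrorCount metric:

# Hypothetical alarm on the custom metric produced by the log metric filter.
# Alarm name, threshold, region and SNS topic ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")

cloudwatch.put_metric_alarm(
    AlarmName="http-502-errors",
    Namespace="ExampleMetric",
    MetricName="ServerErrorCount",
    Statistic="Sum",
    Period=300,                       # evaluate in 5-minute buckets
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",  # periods with no 502s should not alarm
    AlarmActions=["arn:aws:sns:us-west-2:123456789012:alerts-topic"],
)

Note that a metric filter only produces data points for log events ingested after the filter was created; it is not back-filled from older log entries.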
-
Track API requests to each pod on my kubernetes cluster
I am trying to track all API requests to my kubernetes cluster running on some ec2 instances. How do I go about doing this?
I tried using prometheus but have not had any luck so far.
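If the goal is to count requests handled by your own application pods (rather than calls to the Kubernetes API server itself), one common approach is to instrument the application with a request counter and let Prometheus scrape each pod, so the pod label added by Kubernetes service discovery distinguishes the instances. A minimal Python sketch using prometheus_client; the metric name, labels and ports are arbitrary placeholder choices:

# Minimal sketch: expose a per-pod request counter for Prometheus to scrape.
# Metric name, labels and ports are arbitrary placeholders.
import http.server

from prometheus_client import Counter, start_http_server

REQUESTS = Counter(
    "api_requests_total",
    "API requests handled by this pod",
    ["method", "path"],
)

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        REQUESTS.labels(method="GET", path=self.path).inc()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok\n")

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics on port 8000
    http.server.HTTPServer(("", 8080), Handler).serve_forever()

A query such as sum by (pod) (rate(api_requests_total[5m])) would then show the request rate per pod.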
-
How to get Kubernetes pod metrics periodically and append them to a file
I am running a load test against a Kubernetes pod and I want to sample its CPU and memory usage every 5 minutes. I am currently doing this manually with the Linux top command against the Kubernetes pod. Is there any way, given a Kubernetes pod, to fetch its CPU/memory usage every X minutes and append it to a file?
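A minimal sketch of one way to do this, assuming kubectl top pod works against the cluster (i.e. metrics-server is installed); the pod name, namespace, interval and output file are placeholders:

# Hypothetical poller: append `kubectl top pod` output to a file every X minutes.
# Pod name, namespace, interval and output path are placeholders.
import subprocess
import time
from datetime import datetime, timezone

POD = "my-pod"            # placeholder pod name
NAMESPACE = "default"     # placeholder namespace
INTERVAL_SECONDS = 300    # sample every 5 minutes
OUTFILE = "pod_usage.log"

while True:
    result = subprocess.run(
        ["kubectl", "top", "pod", POD, "-n", NAMESPACE, "--no-headers"],
        capture_output=True,
        text=True,
    )
    line = result.stdout.strip() or result.stderr.strip()
    with open(OUTFILE, "a") as f:
        f.write(f"{datetime.now(timezone.utc).isoformat()} {line}\n")
    time.sleep(INTERVAL_SECONDS)

The same loop could equally be a shell script or a Kubernetes CronJob; the point is just to poll the metrics API (which kubectl top reads) on a schedule and append each sample with a timestamp.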
-
Example of Logs-based Metrics API?
I'm having trouble finding the API documentation for creating/managing Logs-based Metrics. Specifically I'm trying to create User-defined metrics as I do in the GUI and then use them for Alerting.
I've looked through the Monitoring and Logging APIs and ostensibly https://cloud.google.com/monitoring/custom-metrics/creating-metrics is right, except listing MetricDescriptors does not list the items I want to create so I'm thinking it's incorrect.
If anyone can help provide the API for Logs-based Metrics, or the API for listing System-defined metrics or User-defined metrics, that would help me a ton.
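For what it's worth, user-defined logs-based metrics are managed through the Cloud Logging API (the projects.metrics resource) rather than Monitoring's MetricDescriptors. A small sketch using the google-cloud-logging Python client, assuming that library's metric helpers; the metric name and filter are placeholders:

# Hypothetical example: create and list user-defined logs-based metrics via the
# Cloud Logging API (projects.metrics). Metric name and filter are placeholders.
from google.cloud import logging as gcp_logging

client = gcp_logging.Client()

# Create a user-defined logs-based metric (equivalent to the GUI action).
metric = client.metric(
    "error_count_metric",              # placeholder metric name
    filter_="severity>=ERROR",         # placeholder log filter
    description="Counts log entries at ERROR or above",
)
if not metric.exists():
    metric.create()

# List the user-defined logs-based metrics in the project.
for m in client.list_metrics():
    print(m.name, m.filter_)

Alerting policies can then reference the resulting metric type as logging.googleapis.com/user/<metric-name> in Cloud Monitoring.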
-
How to sort alerts from Prometheus correctly in the Grafana panel
We have several alerts from Prometheus, we would like to visualize them in tables on the Grafana dashboard in the alert list. We would like to sort the alerts by severity and later by environment. So the severities are:
- critical
- warning
- info
- none
My question is simply how we can sort them in the Grafana dashboard, because sorting alphabetically doesn't make sense. Our Grafana version is 8.4.7, and the grafana.ini configuration file is updated for that version. As a datasource we have Prometheus. In Grafana, I set up the Alert list panel as suggested, then specified grouping by severity and focused on sorting by importance using the built-in support.
How best can I fill in the label for the alert instance so that the alerts are sorted by severity and environment?
In the higher environments it works according to this sorting, but not in LAB, where we have other environments like ALFA, BETA... Is it necessary to change the JSON? In the higher environments we use this in the Alert instance label:
{environment~=".*$env.*"}
-
Azure Monitor datasource isn't working, error: "Azure Log Analytics requires access to Azure Monitor but had the following error: InternalServerError"
We are using the Azure Monitor datasource with Managed Identity. We are monitoring Kubernetes with Log Analytics. We have given the "Log Analytics Reader" and "Monitoring Reader" (subscription-level) roles. Until last Thursday it was working fine; when I checked today, the datasource is not working. Below is the permission that I have given now.