ELK Heartbeat for daemon applications which do not listen on a port
My org has several Java daemon applications which are either
- Pure JMS listeners
- Running CRON jobs
These applications do not expose a port. How can we monitor them using Heartbeat (or something similar)?
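One possible workaround, offered only as a hedged sketch and not as something from the original post: Heartbeat's monitors are limited to ICMP, TCP and HTTP checks, so one option is to embed a tiny HTTP health endpoint in each daemon and point a Heartbeat http monitor at it. The port (8081) and path (/health) below are placeholders, and the snippet uses only the JDK's built-in HttpServer.

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class DaemonHealthEndpoint {

    public static void start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/health", exchange -> {
            // Replace with a real liveness check (e.g. JMS connection is up, last CRON run succeeded).
            byte[] body = "{\"status\":\"UP\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.setExecutor(null);   // use the default executor
        server.start();             // runs on a background thread; does not block the daemon
    }

    public static void main(String[] args) throws Exception {
        start(8081);                // call this once from the daemon's startup code
    }
}

A Heartbeat http monitor pointed at http://host:8081/health would then report the daemon as up or down. For short-lived CRON-style jobs that cannot keep a server running, an alternative is to have each run push a small heartbeat document and alert when none arrives for too long.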
See also questions close to this topic
-
Using SQL in Kibana Dashboard
In Kibana Canvas, users can create metrics and data tables using (Elasticsearch) SQL.
How can users get the same metrics and data tables as in Kibana Canvas, but in a Kibana Dashboard, using (Elasticsearch) SQL?
-
Using Filebeat, how do you update fields.yml?
I am running Filebeat on Kubernetes and have created my own module and mounted it. I can use the module, and my pipelines and dashboards are uploaded by Filebeat.
The problem I have is that the fields under my module and filesets aren't added to the fields.yml in /usr/share/filebeat/fields.yml, so my index template isn't created with the new fields I have added. How do I update this file with the fields from my module/_meta/fields.yml and module/fileset/_meta/fields.yml files? Or how do I update the index template with the fields from my module? If I check the index template, it doesn't include any of the fields from my module.
Thank you for any help on this!
-
Send Angular application logs to Elasticsearch
I am trying to send Angular logs (exceptions, errors) to Elasticsearch. Basically, we have planned to create an intermediate logging API that would push the data to Elasticsearch. I have the API written in .NET and it can talk to Elasticsearch, but my question is how I can assign properties to the log entries when I forward them from Angular to the API. I don't see many examples of that. Is this approach right (Angular logs -> intermediate API -> Elasticsearch)? If so, can you please suggest how I can add an index or timestamp when I send the logs from Angular to the intermediate API?
Here is part of a log entry generated directly by the API:
{ "_index": "logging-api", "_type": "_doc", "_id": "qILeJHcB_U03uXCAT1zW", "_version": 1, "_score": null, "_source": { "@timestamp": "2021-01-21T12:14:45.0102735+00:00", "level": "Information", "messageTemplate": "Executed action {ActionName} in {ElapsedMilliseconds}ms", "message": "Executed action \"Logging.API.Controllers.LoggingController.Get (Logging.API)\" in 0.6977ms", "fields": { "ActionName": "Logging.API.Controllers.LoggingController.Get (Logging.API)", "ElapsedMilliseconds": 0.6977, "EventId": { "Id": 2, "Name": "ActionExecuted" },
-
How to read a file with millions of records and convert it into JSON files
I have been asked this question during an interview process.
I would like to know multiple approaches to solving this problem in the least amount of time.
I know this is related to multi-threading and parallel processing, where you process the data in chunks, or to something like the ELK stack.
Please help me understand the best-fit solution.
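Not part of the original question, but a minimal sketch of one of the approaches mentioned above (chunked, multi-threaded processing): read the file in fixed-size chunks and convert each chunk to newline-delimited JSON on a thread pool. The file name, column layout, field names and chunk size are assumptions.

import java.io.*;
import java.nio.file.*;
import java.util.*;
import java.util.concurrent.*;

public class ChunkedJsonConverter {

    static final int CHUNK_SIZE = 100_000;          // lines per work unit (assumption)

    public static void main(String[] args) throws Exception {
        Path input = Paths.get("records.csv");      // hypothetical input file
        Path outDir = Files.createDirectories(Paths.get("json-out"));
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());

        List<Future<?>> tasks = new ArrayList<>();
        try (BufferedReader reader = Files.newBufferedReader(input)) {
            List<String> chunk = new ArrayList<>(CHUNK_SIZE);
            String line;
            int chunkNo = 0;
            while ((line = reader.readLine()) != null) {
                chunk.add(line);
                if (chunk.size() == CHUNK_SIZE) {
                    tasks.add(submitChunk(pool, outDir, chunk, chunkNo++));
                    chunk = new ArrayList<>(CHUNK_SIZE);
                }
            }
            if (!chunk.isEmpty()) {
                tasks.add(submitChunk(pool, outDir, chunk, chunkNo));
            }
        }
        for (Future<?> t : tasks) t.get();           // wait for all chunks to finish
        pool.shutdown();
    }

    static Future<?> submitChunk(ExecutorService pool, Path outDir,
                                 List<String> lines, int chunkNo) {
        return pool.submit(() -> {
            Path out = outDir.resolve("chunk-" + chunkNo + ".json");
            try (BufferedWriter w = Files.newBufferedWriter(out)) {
                for (String l : lines) {
                    String[] f = l.split(",", -1);   // assumes a simple comma-separated layout
                    // write one JSON object per line (NDJSON); field names are made up
                    w.write("{\"id\":\"" + escape(f[0]) + "\",\"value\":\""
                            + escape(f.length > 1 ? f[1] : "") + "\"}");
                    w.newLine();
                }
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
            return null;
        });
    }

    static String escape(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"");
    }
}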
-
Kibana Dashboard says could not locate that index-pattern
I am using ELK version 7.10.1. Through Logstash I initially uploaded CSV data for 18-Jan-2021 and 19-Jan-2021 to Elasticsearch, then visualized it and created a dashboard in Kibana. Today I did the following:
- Deleted the index pattern data-1, and deleted it from the Saved Objects section as well.
- Uploaded my CSV file data from 01-Jan-2021 to 19-Jan-2021.
- Created a new index pattern with the same name data-1 that I had earlier.
- In the Discover page I can view data from 01-Jan-2021 to 19-Jan-2021.
- In the Discover page, if I change the date range, for example to 05-Jan-2021 to 15-Jan-2021, it says "No results match your search criteria".
I have data from 01-Jan-2021 to 19-Jan-2021 in Elasticsearch, but if I pick a date/time frame somewhere in between, Kibana does not show the data. I am not sure what is causing this problem; ideally it should show data for any available time range, correct? Also, when I try to view my dashboard, it shows the error message below.
Can someone here help me to fix this problem?
-
CSV file load through Logstash to Elasticsearch not working
I am trying to load a CSV file from a Linux system through Logstash (Docker based) with the below conf file, ./logstash/pipeline/logstash_csv_report.conf:

input {
  file {
    path => "/home/user/elk/logstash/report-file.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  csv {
    separator => ","
    columns => ["start_time", "date", "requester", "full-name", "id", "config", "status"]
  }
}

output {
  elasticsearch {
    action => "index"
    hosts => "http://elasticsearch:9200"
    index => "project-info"
  }
  stdout {}
}
I do not know the reason why my CSV file is not getting uploaded into Elasticsearch. The last few lines of my Logstash Docker logs are as follows; I don't see any errors in Logstash.

logstash | [2021-01-18T04:12:36,076][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.1}
logstash | [2021-01-18T04:12:36,213][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
logstash | [2021-01-18T04:12:36,280][INFO ][filewatch.observingtail ][main][497c9eb0da97efa19ad20783321e7bf30eb302262f92ac565b074e3ad91ea72d] START, creating Discoverer, Watch with file and sincedb collections
logstash | [2021-01-18T04:12:36,282][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
logstash | [2021-01-18T04:12:36,474][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
My docker-compose file is as follows.

version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
    container_name: elasticsearch
    restart: unless-stopped
    environment:
      - node.name=elasticsearch
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - '9200:9200'
      - '9300:9300'
    volumes:
      - './elasticsearch:/usr/share/elasticsearch/data'
    networks:
      - elk
  kibana:
    image: docker.elastic.co/kibana/kibana:7.10.1
    container_name: kibana
    restart: unless-stopped
    environment:
      ELASTICSEARCH_URL: "http://elasticsearch:9200"
    ports:
      - '5601:5601'
    volumes:
      - './kibana:/usr/share/kibana/data'
    depends_on:
      - elasticsearch
    networks:
      - elk
  logstash:
    image: docker.elastic.co/logstash/logstash:7.10.1
    container_name: logstash
    restart: unless-stopped
    environment:
      - 'HEAP_SIZE:1g'
      - 'LS_JAVA_OPTS=-Xms1g -Xmx1g'
      - 'ELASTICSEARCH_HOST:elasticsearch'
      - 'ELASTICSEARCH_PORT:9200'
    command: sh -c "logstash -f /usr/share/logstash/pipeline/logstash_csv_report.conf"
    ports:
      - '5044:5044'
      - '5000:5000/tcp'
      - '5000:5000/udp'
      - '9600:9600'
    volumes:
      - './logstash/pipeline:/usr/share/logstash/pipeline'
    depends_on:
      - elasticsearch
    networks:
      - elk
networks:
  elk:
    driver: bridge
In my ./logstash/pipeline folder I have only the logstash_csv_report.conf file. The same CSV file can be uploaded using the Kibana GUI via the import option.
Could someone please help me resolve this problem with the Logstash upload? Curl output:
# curl -XGET http://51.52.53.54:9600/_node/stats/?pretty { "host" : "3c08f83dfc9b", "version" : "7.10.1", "http_address" : "0.0.0.0:9600", "id" : "5f301139-33bf-4e4d-99a0-7b4d7b464675", "name" : "3c08f83dfc9b", "ephemeral_id" : "95a0101e-e54d-4f72-aa7a-dd18ccb2814e", "status" : "green", "snapshot" : false, "pipeline" : { "workers" : 64, "batch_size" : 125, "batch_delay" : 50 }, "jvm" : { "threads" : { "count" : 157, "peak_count" : 158 }, "mem" : { "heap_used_percent" : 16, "heap_committed_in_bytes" : 4151836672, "heap_max_in_bytes" : 4151836672, "heap_used_in_bytes" : 689455928, "non_heap_used_in_bytes" : 190752760, "non_heap_committed_in_bytes" : 218345472, "pools" : { "survivor" : { "peak_max_in_bytes" : 143130624, "max_in_bytes" : 143130624, "committed_in_bytes" : 143130624, "peak_used_in_bytes" : 65310304, "used_in_bytes" : 39570400 }, "old" : { "peak_max_in_bytes" : 2863333376, "max_in_bytes" : 2863333376, "committed_in_bytes" : 2863333376, "peak_used_in_bytes" : 115589344, "used_in_bytes" : 115589344 }, "young" : { "peak_max_in_bytes" : 1145372672, "max_in_bytes" : 1145372672, "committed_in_bytes" : 1145372672, "peak_used_in_bytes" : 1145372672, "used_in_bytes" : 534296184 } } }, "gc" : { "collectors" : { "old" : { "collection_count" : 3, "collection_time_in_millis" : 1492 }, "young" : { "collection_count" : 7, "collection_time_in_millis" : 303 } } }, "uptime_in_millis" : 4896504 }, "process" : { "open_file_descriptors" : 91, "peak_open_file_descriptors" : 92, "max_file_descriptors" : 1048576, "mem" : { "total_virtual_in_bytes" : 21971415040 }, "cpu" : { "total_in_millis" : 478180, "percent" : 0, "load_average" : { "1m" : 1.35, "5m" : 0.7, "15m" : 0.53 } } }, "events" : { "in" : 0, "filtered" : 0, "out" : 0, "duration_in_millis" : 0, "queue_push_duration_in_millis" : 0 }, "pipelines" : { "main" : { "events" : { "out" : 0, "duration_in_millis" : 0, "queue_push_duration_in_millis" : 0, "filtered" : 0, "in" : 0 }, "plugins" : { "inputs" : [ { "id" : "497c9eb0da97efa19ad20783321e7bf30eb302262f92ac565b074e3ad91ea72d", "events" : { "out" : 0, "queue_push_duration_in_millis" : 0 }, "name" : "file" } ], "codecs" : [ { "id" : "rubydebug_a060ea28-52ce-4186-a474-272841e0429e", "decode" : { "out" : 0, "writes_in" : 0, "duration_in_millis" : 0 }, "encode" : { "writes_in" : 0, "duration_in_millis" : 2 }, "name" : "rubydebug" }, { "id" : "plain_d2037602-bfe9-4eaf-8cc8-0a84665fa186", "decode" : { "out" : 0, "writes_in" : 0, "duration_in_millis" : 0 }, "encode" : { "writes_in" : 0, "duration_in_millis" : 0 }, "name" : "plain" }, { "id" : "plain_1c01f964-82e5-45a1-b9f9-a400bc2ac486", "decode" : { "out" : 0, "writes_in" : 0, "duration_in_millis" : 0 }, "encode" : { "writes_in" : 0, "duration_in_millis" : 0 }, "name" : "plain" } ], "filters" : [ { "id" : "3eee98d7d4b500333a2c45a729786d4d2aefb7cee7ae79b066a50a1630312b25", "events" : { "out" : 0, "duration_in_millis" : 39, "in" : 0 }, "name" : "csv" } ], "outputs" : [ { "id" : "8959d62efd3616a9763067781ec2ff67a7d8150d6773a48fc54f71478a9ef7ab", "events" : { "out" : 0, "duration_in_millis" : 0, "in" : 0 }, "name" : "elasticsearch" }, { "id" : "b457147a2293c2dee97b6ee9a5205de24159b520e86eb89be71fde7ba394a0d2", "events" : { "out" : 0, "duration_in_millis" : 22, "in" : 0 }, "name" : "stdout" } ] }, "reloads" : { "last_success_timestamp" : null, "last_error" : null, "successes" : 0, "failures" : 0, "last_failure_timestamp" : null }, "queue" : { "type" : "memory", "events_count" : 0, "queue_size_in_bytes" : 0, "max_queue_size_in_bytes" : 0 }, "hash" : 
"3479b7408213a7b52f36d8ad3dbd5a3174768a004119776e0244ed1971814f72", "ephemeral_id" : "ffc4d5d6-6f90-4c24-8b2a-e932d027a5f2" }, ".monitoring-logstash" : { "events" : null, "plugins" : { "inputs" : [ ], "codecs" : [ ], "filters" : [ ], "outputs" : [ ] }, "reloads" : { "last_success_timestamp" : null, "last_error" : null, "successes" : 0, "failures" : 0, "last_failure_timestamp" : null }, "queue" : null } }, "reloads" : { "successes" : 0, "failures" : 0 }, "os" : { "cgroup" : { "cpuacct" : { "usage_nanos" : 478146261497, "control_group" : "/" }, "cpu" : { "cfs_quota_micros" : -1, "stat" : { "number_of_times_throttled" : 0, "time_throttled_nanos" : 0, "number_of_elapsed_periods" : 0 }, "control_group" : "/", "cfs_period_micros" : 100000 } } }, "queue" : { "events_count" : 0 }
-
Cassandra health check in phantom
In our containerised Scala application, we are using the phantom library for persisting and retrieving data from Cassandra. We have a requirement to do a regular health check on Cassandra. Presently, on bootstrap of the application, when there is a deployment in any new Kubernetes pod, we check for an active Cassandra session, and later we run a scheduled check on Cassandra health.
I would appreciate it if you could share alternatives for doing a health check on Cassandra.
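As an illustration only (using the DataStax Java driver rather than the phantom API the question mentions), here is a minimal sketch of a scheduled Cassandra liveness probe: it periodically runs a cheap query against system.local and treats any failure as unhealthy. The contact point, datacenter and interval are placeholders.

import com.datastax.oss.driver.api.core.CqlSession;
import java.net.InetSocketAddress;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CassandraHealthCheck {
    public static void main(String[] args) {
        CqlSession session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("cassandra-host", 9042)) // placeholder host
                .withLocalDatacenter("datacenter1")                             // placeholder DC
                .build();

        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try {
                // A cheap query that only touches the coordinator node.
                String version = session.execute("SELECT release_version FROM system.local")
                                        .one()
                                        .getString("release_version");
                System.out.println("Cassandra healthy, release_version=" + version);
            } catch (Exception e) {
                // Report unhealthy; a real service would flip a readiness flag or metric here.
                System.err.println("Cassandra health check failed: " + e.getMessage());
            }
        }, 0, 30, TimeUnit.SECONDS);
    }
}

In Kubernetes, the result of such a check is often exposed through a readiness or liveness probe rather than only logged.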
-
Load balancer health check vs docker health check?
I have an ECS cluster with multiple nodes (task defs) fronted by an application load balancer. Does it make sense to configure a health check at the load balancer and at the container level (within the task definition)?
The load balancer runs the configured health check against every registered target so it can unregister failing nodes. Setting the health check at the container level accomplishes the same thing: ECS will unregister any container that fails the health check (according to your configuration). ECS will always instantiate more instances of your task def to satisfy your desired count.
To me it sounds like if your task definition only has a single container, then only setting the health check at the load balancer (since it's required) is enough. Am I missing anything?
-
(Bluetooth Low Energy) Subscribe to a characteristic when the peripheral sends an "indication", using Noble
I'm trying to read from my blood pressure monitor over BLE using noble (https://github.com/abandonware/noble). The service is 1810 and the characteristic is 2a35, but the buffer is empty. The properties are: [ 'indicate' ]
I know that I have to subscribe to read the buffer, but I don't know how to do this. Can someone help? I am using this code:
const noble = require('@abandonware/noble');

noble.on('stateChange', async (state) => {
  if (state === 'poweredOn') {
    console.log('Scanning');
    await noble.startScanningAsync([], false);
  } else {
    noble.stopScanning();
  }
});

noble.on('discover', peripheral => {
  // connect to the first peripheral that is scanned
  noble.stopScanning();
  const name = peripheral.advertisement.localName;
  console.log(`Connecting to '${name}' ${peripheral.id}`);
  connectAndSetUp(peripheral);
});

function connectAndSetUp(peripheral) {
  peripheral.connect(error => {
    console.log('Connected to', peripheral.id);
    // specify the services and characteristics to discover
    const serviceUUIDs = ['1810'];
    const characteristicUUIDs = ['2a35'];
    peripheral.discoverSomeServicesAndCharacteristics(
      serviceUUIDs,
      characteristicUUIDs,
      onServicesAndCharacteristicsDiscovered
    );
  });
  peripheral.on('disconnect', () => console.log('disconnected'));
}

function onServicesAndCharacteristicsDiscovered(error, services, characteristics) {
  console.log('Discovered services and characteristics');
  const echoCharacteristic = characteristics[0];
  console.log(echoCharacteristic);
  // console.log( echoCharacteristic.read());
  // data callback receives notifications
  echoCharacteristic.on('data', (data, isNotification) => {
    console.log(`Received: "${data}"`);
  });
  // echoCharacteristic.once('data',data => console.log("oii "+data));
  // subscribe to be notified whenever the peripheral update the characteristic
  echoCharacteristic.subscribe((error) => {
    if (error) {
      console.error('Error subscribing to echoCharacteristic');
    } else {
      // console.log( echoCharacteristic.Buffer);
      console.log('Subscribed for echoCharacteristic notifications');
    }
  });
}
-
PowerShell and YAML files
We're trying to write a script in which we need to fill several YAML template files. The situation is as follows: we want to use Heartbeat monitoring for our applications, all for the uptime stuff. We found out that Heartbeat does not like it when more than 300 applications are being monitored via one Heartbeat service. That is why we have decided to split it up into several services, which unfortunately requires us to change our PowerShell script. Our current script is as follows:
$appservers =
$servers = $appservers
$templatelocation =
$ymllocation =
$Directory =

#Copy the template file to the correct folder
try {
    Copy-Item -Path $templatelocation -Destination $ymllocation -Recurse -Force -ErrorAction Stop
    echo "Date: $((Get-Date).ToString()). Status: Template Successfully copied!"
}
catch {
    $ErrorMessage = $Error[0].Exception.Message
    echo "Date: $((Get-Date).ToString()). Status: Copy Failure - $ErrorMessage"
}

ForEach ($server in $servers) {
    Invoke-Command -ComputerName $server -ArgumentList $server, $ymllocation, $templatelocation, $Directory -ScriptBlock {
        param($server, $ymllocation, $templatelocation, $Directory)
        Import-module powershell-yaml
        Write-Host "Connecting to server: $server in order to load the websites."
        $Sites = Get-Website | Where-Object { $_.Name -notlike '' }
        Write-Host "Loading websites of server: $server succesfull."
        if ($server -like '*') {
            ForEach ($site in $sites) {
                $SiteName = $Site.Name
                $Pattern = "heartbeat.monitors:"
                $Filebeat = $ymllocation
                $FileOriginal = Get-Content $Filebeat
                #Import the hearbeat file as RAW yml file
                $yml = convertfrom-yaml $(Get-content $ymllocation -raw)
                #Prepare the file for the count
                $monitorcount = $yml.'heartbeat.monitors'
                [String[]] $FileModified = @()
                Foreach ($Line in $FileOriginal) {
                    $FileModified += $Line
                    if ($Line -match $pattern) {
                        $FileModified += ""
                        $FileModified += "- type: http"
                        $FileModified += " id: $sitename"
                        $FileModified += " name: $sitename"
                        $FileModified += " urls:"
                        $FileModified += " - https://$sitename"
                        $FileModified += ""
                        $FileModified += " #Configure task schedule"
                        $FileModified += " schedule: '@every 120s'"
                        $FileModified += " # Total test connection and data exchange timeout"
                        $FileModified += " timeout: 30s"
                        $fileModified += " # Name of corresponding APM service, if Elastic APM is in use for the monitored service."
                        $FileModified += " #service_name: my-apm-service-name"
                        $FileModified += ""
                    }
                }
                Set-Content $Filebeat $FileModified
            }
            Write-host "Added the websites of server: $server succesfully."
        }
    }
}
I've cleared out a couple of sections in order to protect the names of the environment. The above script currently generates one big file, but what we would like is to split the output into several files of 50 monitors each. The problem is that we can get a count of the added monitors, but we have no clue how to go further than that: how to, from that point, create a new file and continue filling it from where the previous one stopped (so from monitor 51 onwards).
Does anyone have any advice?
Thank you for your time!
Kind regards.
-
Efficiently detect if a heartbeat is not received (and fire an event)
I have devices which send heartbeats every 15 seconds. I am wondering what good approaches there are for detecting, on that 15-second cadence, when a heartbeat has not arrived.
One naive approach I can think of: we know all our machines and their machineIds. We create a set of the Ids, and every time we receive a heartbeat (containing the machineId) we remove that Id from the set. After 30 seconds (Nyquist) we check the set, and any remaining machineIds tell us which machines didn't send a heartbeat. Any better approaches than this?
Thanks
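A minimal sketch of the set-based approach described above, written in Java purely for illustration: machine IDs are assumed to be strings, and the 30-second window and the event handling are placeholders.

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HeartbeatWatcher {
    private final Set<String> allMachines;                        // every known machineId
    private final Set<String> pending = ConcurrentHashMap.newKeySet();

    public HeartbeatWatcher(Set<String> allMachines) {
        this.allMachines = allMachines;
    }

    // Call this from whatever receives the heartbeat messages.
    public void onHeartbeat(String machineId) {
        pending.remove(machineId);
    }

    public void start() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            // Anything still pending from the previous window missed its heartbeat.
            for (String machineId : pending) {
                fireMissedHeartbeatEvent(machineId);
            }
            // Start a new detection window.
            pending.clear();
            pending.addAll(allMachines);
        }, 0, 30, TimeUnit.SECONDS);
    }

    private void fireMissedHeartbeatEvent(String machineId) {
        System.out.println("No heartbeat from " + machineId);     // placeholder event
    }
}

A common variant is to keep a map of machineId to last-seen timestamp and flag entries older than some threshold, which avoids tying detection to fixed windows.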
-
How to launch a process at boot time and restart it when it has crashed or quit unexpectedly on OS X and Windows?
What is the best way to launch a background application at boot time and manage its lifecycle, in the sense that when it crashes or quits unexpectedly it can be relaunched? My application is a bit heavy, so I was thinking of two options:
- Run My_Application itself as a daemon process (I am not willing to opt for this method, as My_Application is a process with a large footprint).
- A manager daemon process which will in turn spawn the My_Application process as a child process.
These are my thoughts. Can anyone suggest any other efficient way, or comment on the above approaches?
Also, how can I best implement a heartbeat mechanism to periodically check whether the background process is running, and force a launch if it is not?
Thanks!
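A minimal sketch of the second option above (a manager daemon that spawns and restarts the application), written in Java purely as an illustration: the command line is a placeholder, and registering the manager itself at boot (launchd on macOS, a Windows service or Task Scheduler entry) is left out.

import java.util.concurrent.TimeUnit;

public class AppSupervisor {
    public static void main(String[] args) throws Exception {
        // Placeholder command line for the heavy application.
        ProcessBuilder builder = new ProcessBuilder("java", "-jar", "My_Application.jar")
                .inheritIO();

        while (true) {
            Process app = builder.start();
            System.out.println("Started My_Application, pid=" + app.pid());

            // Heartbeat loop: poll liveness instead of blocking forever, so the manager
            // could also run other periodic checks here (e.g. a health file or socket).
            while (app.isAlive()) {
                TimeUnit.SECONDS.sleep(15);
            }
            System.out.println("My_Application exited with code " + app.exitValue()
                    + ", restarting in 5 seconds");
            TimeUnit.SECONDS.sleep(5);   // simple backoff before relaunching
        }
    }
}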