Tool for Azure Cognitive Search similar to Logstash?
My company has a lot of data in a PostgreSQL database, and the requirement now is to add a search feature on top of it. We have been asked to use Azure Cognitive Search. I want to know how we can transform the data and send it to the search engine. There are a few cases we have to handle:
- How do we transfer and upload the existing data into the search engine's index?
- What is the easiest way to keep the index up to date with new records in our production database? (For now we are using Java back-end code to transform the data and update the index, but it is very time-consuming.)
- What is the best way to handle changes to the existing database structure? How do we update the indexer without a lot of rework, i.e. without recreating the indexers every time?
Is there any way to automatically update the index whenever a database record changes?
2 answers
-
answered 2022-04-22 16:19
Austin
You can either write code to push data from your PostgreSQL database into the Azure Search index via the /docs/index API, or you can configure an Azure Search indexer to do the data ingestion. The upside of configuring an indexer is that you can also configure it to monitor the data source on a schedule for updates and have those updates reflected in the search index automatically, for example via the SQL Integrated Change Tracking policy.
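For the push model, a minimal sketch using the Python azure-search-documents package follows; the endpoint, key, index name, and document fields are placeholders, not values from the question:

# Push-model sketch, assuming `pip install azure-search-documents` and an
# existing index named "products" whose key field is "id".
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

endpoint = "https://<your-search-service>.search.windows.net"  # placeholder
client = SearchClient(endpoint=endpoint,
                      index_name="products",
                      credential=AzureKeyCredential("<admin-api-key>"))

# Documents transformed from PostgreSQL rows; field names must match the index schema.
docs = [
    {"id": "1", "name": "Widget", "description": "a sample record"},
    {"id": "2", "name": "Gadget", "description": "another sample record"},
]

# merge_or_upload_documents updates existing documents and inserts new ones,
# so the same call works for the initial load and for incremental updates.
results = client.merge_or_upload_documents(documents=docs)
print(all(r.succeeded for r in results))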
PostgreSQL is a supported datasource for Azure Search indexers, although the datasource is in preview (not yet generally available).
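For the pull model, a rough sketch of creating a data source plus a scheduled indexer with the same Python SDK is below. The data source type string, connection string, and all names are placeholders; the exact preview type and API version for PostgreSQL should be taken from the current Azure documentation:

# Pull-model sketch, assuming azure-search-documents and an existing target index.
from datetime import timedelta

from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexerClient
from azure.search.documents.indexes.models import (
    IndexingSchedule,
    SearchIndexer,
    SearchIndexerDataContainer,
    SearchIndexerDataSourceConnection,
)

endpoint = "https://<your-search-service>.search.windows.net"  # placeholder
client = SearchIndexerClient(endpoint, AzureKeyCredential("<admin-api-key>"))

# "<preview-postgresql-type>" is a placeholder; look up the preview data source
# type for PostgreSQL in the current docs before using this.
data_source = SearchIndexerDataSourceConnection(
    name="pg-datasource",
    type="<preview-postgresql-type>",
    connection_string="<postgresql-connection-string>",
    container=SearchIndexerDataContainer(name="public.my_table"),
)
client.create_or_update_data_source_connection(data_source)

# The schedule makes the indexer poll the data source and push changed rows
# into the index without any application code.
indexer = SearchIndexer(
    name="pg-indexer",
    data_source_name="pg-datasource",
    target_index_name="products",
    schedule=IndexingSchedule(interval=timedelta(hours=1)),
)
client.create_or_update_indexer(indexer)
client.run_indexer("pg-indexer")  # optional immediate run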
-
answered 2022-04-22 16:32
Gia Mondragon - MSFT
Besides the answer above, which involves coding on your end, there is a solution you can implement with the Azure Data Factory PostgreSQL connector: use a custom query that tracks recent records, and create a pipeline activity that sinks the results to an Azure Blob Storage account. Then, within Data Factory, you can chain a pipeline activity that copies the data into an Azure Cognitive Search index, and add a trigger so the pipeline runs at specified times.
Once the staged data is in the storage account in delimitedText format, you can also use the built-in Azure Blob indexer with change tracking enabled.
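As a hedged sketch of that last step (service names, container, and connection strings are placeholders), pointing the built-in blob indexer at the staged container could look roughly like this in Python:

# Blob-indexer sketch, assuming azure-search-documents and an existing target index.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexerClient
from azure.search.documents.indexes.models import (
    IndexingParameters,
    SearchIndexer,
    SearchIndexerDataContainer,
    SearchIndexerDataSourceConnection,
)

endpoint = "https://<your-search-service>.search.windows.net"  # placeholder
client = SearchIndexerClient(endpoint, AzureKeyCredential("<admin-api-key>"))

blob_source = SearchIndexerDataSourceConnection(
    name="staged-blobs",
    type="azureblob",
    connection_string="<storage-account-connection-string>",
    container=SearchIndexerDataContainer(name="staged-data"),
)
client.create_or_update_data_source_connection(blob_source)

# parsingMode "delimitedText" treats each row of the staged CSV as one document.
# Depending on SDK version, `configuration` may need to be an
# IndexingParametersConfiguration object instead of a plain dict.
indexer = SearchIndexer(
    name="staged-blob-indexer",
    data_source_name="staged-blobs",
    target_index_name="products",
    parameters=IndexingParameters(configuration={"parsingMode": "delimitedText"}),
)
client.create_or_update_indexer(indexer)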
See also questions close to this topic
-
ERROR: invalid byte sequence for encoding WITH psql
I've seen numerous issues in other posts with the copy command and:
ERROR: invalid byte sequence for encoding "UTF8": 0xfc
And the consensus in these posts appears to be to specify the encoding in the command you're doing the copy with. I have done so:
psql -h localhost -p 5432 -d BOBDB -U BOB -c "\COPY \"BOBTB01\" FROM 'C:\Temp\file.csv' with csv HEADER ENCODING 'WIN1252'";
Password for user BOB:
ERROR: character with byte sequence 0x81 in encoding "WIN1252" has no equivalent in encoding "UTF8"
CONTEXT: COPY BOBTB01, line 76589
That confused me, so I changed the encoding from WIN1252 to UTF8. Having done so, I get a slightly different error: the failure is on a different line and the text is slightly different.
psql -h localhost -p 5432 -d BOBDB -U BOB -c "\COPY \"BOBTB01\" FROM 'C:\Temp\file.csv' with csv HEADER ENCODING 'UTF8'";
Password for user BOB:
ERROR: invalid byte sequence for encoding "UTF8": 0xfc
CONTEXT: COPY BOBTB01, line 163
This is the encoding shown in the database:
show client_encoding;

 client_encoding
-----------------
 UTF8
(1 row)
The file is from a reliable source, and I happen to have "R" installed, which also does .csv import. The file was pulled into "R" without issue; that makes me think it's not the file but something else. Is there another switch or syntax that can bypass these issues, perhaps?
I'm not sure what is wrong.
Can you help?
Thanks.
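One way to narrow a problem like this down is to scan the file for bytes that do not decode in the encoding you intend to declare to \COPY. A minimal diagnostic sketch, assuming Python 3 is available and using a placeholder file path and encoding:

# Diagnostic sketch: report raw lines containing bytes that do not decode
# in the encoding you plan to declare. Path and encoding are placeholders.
encoding = "cp1252"   # also try "utf-8" to compare the two failures
path = r"C:\Temp\file.csv"

with open(path, "rb") as f:
    for lineno, raw in enumerate(f, start=1):
        try:
            raw.decode(encoding)
        except UnicodeDecodeError as err:
            bad = raw[err.start]
            print(f"line {lineno}: byte 0x{bad:02x} at column {err.start} is not valid {encoding}")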
-
What's missing in my Ruby 'inverse_of' relationship?
I know this topic has been addressed, but I have been at this for 2 days and I'm just stuck. I know inverse of does not create a new query, so should I use another method?
Question: how do I set up 'inverse_of' with a has_one / belongs_to relationship on the same class?
Explanation: a user 'has_one :spouse' and 'belongs_to :spouse_from'. They are inverses of each other. When a user signs up, they can invite their significant other. For example:
- user_a invites & creates user_b
- user_b.spouse_id is set to user_a.id
- In a separate method I want to be able to run an update like user_a.spouse_id = user_a.spouse.id
The only association that works at this point is user_b.spouse.
class User
  has_one :spouse, class_name: 'User', foreign_key: :spouse_id, dependent: :nullify, inverse_of: :spouse_from
  belongs_to :spouse_from, class_name: 'User', foreign_key: :spouse_id, inverse_of: :spouse, optional: true
end
-
Normalizing data in PostgreSQL
This application will read an iTunes library in comma-separated-values (CSV) format and produce properly normalized tables as specified below. Once you have placed the proper data in the tables, press the button below to check your answer.
We will do some things differently in this assignment. We will not use a separate "raw" table; we will just use ALTER TABLE statements to remove columns once we no longer need them (i.e., once we have converted them into foreign keys).
We will use the same CSV track data as in prior exercises - this time we will build a many-to-many relationship using a junction/through/join table between tracks and artists.
To grade this assignment, the program will run a query like this on your database and look for the data it expects to see:
SELECT track.title, album.title, artist.name
FROM track
JOIN album ON track.album_id = album.id
JOIN tracktoartist ON track.id = tracktoartist.track_id
JOIN artist ON tracktoartist.artist_id = artist.id
ORDER BY track.title LIMIT 3;
The expected output is this:
The expected result of this query on your database is:
title                    album                        artist
A Boy Named Sue (live)   The Legend Of Johnny Cash    Jo
DROP TABLE album CASCADE;
CREATE TABLE album (
    id SERIAL,
    title VARCHAR(128) UNIQUE,
    PRIMARY KEY(id)
);

DROP TABLE track CASCADE;
CREATE TABLE track (
    id SERIAL,
    title TEXT,
    artist TEXT,
    album TEXT,
    album_id INTEGER REFERENCES album(id) ON DELETE CASCADE,
    count INTEGER,
    rating INTEGER,
    len INTEGER,
    PRIMARY KEY(id)
);

DROP TABLE artist CASCADE;
CREATE TABLE artist (
    id SERIAL,
    name VARCHAR(128) UNIQUE,
    PRIMARY KEY(id)
);

DROP TABLE tracktoartist CASCADE;
CREATE TABLE tracktoartist (
    id SERIAL,
    track VARCHAR(128),
    track_id INTEGER REFERENCES track(id) ON DELETE CASCADE,
    artist VARCHAR(128),
    artist_id INTEGER REFERENCES artist(id) ON DELETE CASCADE,
    PRIMARY KEY(id)
);

\copy track(title,artist,album,count,rating,len) FROM 'library.csv' WITH DELIMITER ',' CSV;

INSERT INTO album (title) SELECT DISTINCT album FROM track;
UPDATE track SET album_id = (SELECT album.id FROM album WHERE album.title = track.album);

INSERT INTO tracktoartist (track, artist) SELECT DISTINCT ...
INSERT INTO artist (name) ...
UPDATE tracktoartist SET track_id = ...
UPDATE tracktoartist SET artist_id = ...

-- We are now done with these text fields
ALTER TABLE track DROP COLUMN album;
ALTER TABLE track ...
ALTER TABLE tracktoartist DROP COLUMN track;
ALTER TABLE tracktoartist ...

SELECT track.title, album.title, artist.name
FROM track
JOIN album ON track.album_id = album.id
JOIN tracktoartist ON track.id = tracktoartist.track_id
JOIN artist ON tracktoartist.artist_id = artist.id
LIMIT 3;
What am I doing wrong with the code?
-
Deploy VueJS + API app to Azure Static Web App with Gitlab doesn't create functions
I've started creating a small application that will use VueJS as the frontend with Azure Functions as the backend. I was looking at using Azure Static Web Apps to host both components of the application and GitLab to store and deploy the app.
Everything works except the creation of the Azure Functions. I am following https://docs.microsoft.com/en-us/azure/static-web-apps/gitlab?tabs=vue
The output from the deploy step is listed below:
App Directory Location: '/builds/*/valhalla/valhalla-client/dist/spa' was found.
Api Directory Location: '/builds/*/valhalla/valhalla-api/dist' was found.
Looking for event info
Could not get event info. Proceeding
Starting to build app with Oryx
Azure Static Web Apps utilizes Oryx to build both static applications and Azure Functions. You can find more details on Oryx here: https://github.com/microsoft/Oryx
---Oryx build logs---
Operation performed by Microsoft Oryx, https://github.com/Microsoft/Oryx
You can report issues at https://github.com/Microsoft/Oryx/issues
Oryx Version: 0.2.20220131.3, Commit: ec344c058843461525ff03b46031553b6e15a47a, ReleaseTagName: 20220131.3
Build Operation ID: |qAffRWArEg8=.deee9498_
Repository Commit : 7cdd5b61f956e6cb8459b13a42af363c4440a97b
Detecting platforms...
Could not detect any platform in the source directory. Error: Could not detect the language from repo.
---End of Oryx build logs---
Oryx was unable to determine the build steps. Continuing assuming the assets in this folder are already built. If this is an unexpected behavior please contact support.
Finished building app with Oryx
Starting to build function app with Oryx
---Oryx build logs---
Operation performed by Microsoft Oryx, https://github.com/Microsoft/Oryx
You can report issues at https://github.com/Microsoft/Oryx/issues
Oryx Version: 0.2.20220131.3, Commit: ec344c058843461525ff03b46031553b6e15a47a, ReleaseTagName: 20220131.3
Build Operation ID: |NGXLP5bVBRk=.705477f6_
Repository Commit : 7cdd5b61f956e6cb8459b13a42af363c4440a97b
Detecting platforms...
Could not detect any platform in the source directory. Error: Could not detect the language from repo.
---End of Oryx build logs---
Oryx was unable to determine the build steps. Continuing assuming the assets in this folder are already built. If this is an unexpected behavior please contact support.
[WARNING] The function language could not be detected. The language will be defaulted to node.
Function Runtime Information. OS: linux, Functions Runtime: ~3, node version: 12
Finished building function app with Oryx
Zipping Api Artifacts
Done Zipping Api Artifacts
Zipping App Artifacts
Done Zipping App Artifacts
Uploading build artifacts.
Finished Upload. Polling on deployment.
Status: InProgress. Time: 0.1762737(s)
Status: InProgress. Time: 15.3950401(s)
Status: Succeeded. Time: 30.5043965(s)
Deployment Complete :)
Visit your site at: https://polite-pebble-0dc00000f.1.azurestaticapps.net
Thanks for using Azure Static Web Apps!
Exiting
Cleaning up project directory and file based variables 00:00
Job succeeded
The deploy step appears to have succeeded, and the frontend is deployed, but no Azure Functions show up in this Static Web App. Have I missed something here? So far, the Azure Functions I have are the boilerplate from instantiating a new Azure Functions folder.
image: node:latest

variables:
  API_TOKEN: $DEPLOYMENT_TOKEN
  APP_PATH: '$CI_PROJECT_DIR/valhalla-client/dist/spa'
  API_PATH: '$CI_PROJECT_DIR/valhalla-api/dist'

stages:
  - install_api
  - build_api
  - install_client
  - build_client
  - deploy

install_api:
  stage: install_api
  script:
    - cd valhalla-api
    - npm ci
  artifacts:
    paths:
      - valhalla-api/node_modules/
  cache:
    key: node
    paths:
      - valhalla-api/node_modules/
  only:
    - master

install_client:
  stage: install_client
  script:
    - cd valhalla-client
    - npm ci
  artifacts:
    paths:
      - valhalla-client/node_modules/
  cache:
    key: node
    paths:
      - valhalla-client/node_modules/
  only:
    - master

build_api:
  stage: build_api
  dependencies:
    - install_api
  script:
    - cd valhalla-api
    - npm install -g azure-functions-core-tools@3 --unsafe-perm true
    - npm run build
  artifacts:
    paths:
      - valhalla-api/dist
  cache:
    key: build_api
    paths:
      - valhalla-api/dist
  only:
    - master
  needs:
    - job: install_api
      artifacts: true
      optional: true

build_client:
  stage: build_client
  dependencies:
    - install_client
  script:
    - cd valhalla-client
    - npm i -g @quasar/cli
    - quasar build
  artifacts:
    paths:
      - valhalla-client/dist/spa
  cache:
    key: build_client
    paths:
      - valhalla-client/dist/spa
  only:
    - master
  needs:
    - job: install_client
      artifacts: true
      optional: true

deploy:
  stage: deploy
  dependencies:
    - build_api
    - build_client
  image: registry.gitlab.com/static-web-apps/azure-static-web-apps-deploy
  script:
    - echo "App deployed successfully."
  only:
    - master
-
Azure Synapse Notebooks Vs Azure Databricks notebooks
I was going through the features of Azure Synapse Notebooks Vs Azure Databricks notebooks.
- Are there any major differences between these apart from the component they belong to?
- Are there any scenarios where one is more appropriate than the other?
-
How to authorize Azure Container Registry requests from .NET Core C#
I have a web application which creates container instances, and there are specific container registry images I want to use. As a result, I use this code to get my Azure container registry:
IAzure azure = Azure.Authenticate($"{applicationDirectory}/Resources/my.azureauth").WithDefaultSubscription();
IRegistry azureRegistry = azure.ContainerRegistries.GetByResourceGroup("testResourceGroup", "testContainerRegistryName");
I get this error when the second line of code is hit
The client 'bc8fd78c-2b1b-4596-827e-6a3c918b7c17' with object id 'bc8fd78c-2b1b-4596-827e-6a3c918b7c17' does not have authorization to perform action 'Microsoft.ContainerRegistry/registries/read' over scope '/subscriptions/506b787d-83ef-426a-b7b8-7bfcdd475855/resourceGroups/testapp-live/providers/Microsoft.ContainerRegistry/registries/testapp' or the scope is invalid. If access was recently granted, please refresh your credentials.
I literally have no idea what to do about this. I have seen so many articles talking about Azure AD and granting user roles and such. Can someone please walk me step by step through how to fix this? I really appreciate the help. Thanks.
I cannot find any client under that object ID, so I am perfectly fine starting from scratch again with a better understanding of what I am doing.
-
Logstash Conf | Extracting Filename from Path
I am trying to set up Logstash to feed Elasticsearch. In the course of this, I've created the following conf file, which seems to work nicely:
input {
  beats {
    port => 5044
  }
  file {
    path => "C:/f1/f2/Logs/f3/LocalHost#base#iway_2022-03-28T10_45_15.log"
  }
}
filter {
  grok {
    match => {
      "message" => [
        ".%{TIMESTAMP_ISO8601:timeStamp}. %{LOGLEVEL:loglevel} .(W.)%{DATA:thread}.%{INT:thread_pool}. %{GREEDYDATA:msgbody}",
        ".%{TIMESTAMP_ISO8601:timeStamp}. %{LOGLEVEL:loglevel} .%{DATA:thread}. %{GREEDYDATA:msgbody}"
      ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    index => "iway_logs"
    user => "elastic"
    password => "something"
    cacert => "C:\f1\f2\logstash-8.1.3\config\cert\elasticsearch_http_ca.crt"
  }
}
I have been trying to add two new fields but have been unsuccessful so far. The following is the current version of the conf file after several revisions.
input {
  beats {
    port => 5044
  }
  file {
    path => "C:/f1/f2/Logs/f3/LocalHost#base#iway_2022-03-28T10_45_15.log"
  }
}
filter {
  grok {
    match => {
      "message" => [
        ".%{TIMESTAMP_ISO8601:timeStamp}. %{LOGLEVEL:loglevel} .(W.)%{DATA:thread}.%{INT:thread_pool}. %{GREEDYDATA:msgbody}",
        ".%{TIMESTAMP_ISO8601:timeStamp}. %{LOGLEVEL:loglevel} .%{DATA:thread}. %{GREEDYDATA:msgbody}"
      ]
    }
  }
  grok {
    match => {
      "path" => "%{GREEDYDATA}/%{GREEDYDATA:filename}\.log"
    }
  }
  mutate {
    split => { "filename" => "#" }
    add_field => { "serverName" => "%{[filename][0]}" }
    add_field => { "configName" => "%{[filename][1]}" }
  }
}
output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    index => "iway_logs"
    user => "elastic"
    password => "something"
    cacert => "C:\f1\f2\logstash-8.1.3\config\cert\elasticsearch_http_ca.crt"
  }
}
The new fields, serverName and configName, always contain the raw expression rather than an evaluated value. Could someone help? TIA.
-
Logstash (EACCESS) Permission Denied
My logstash instance has stopped working with a Permission denied error. I'm running on Windows. I've been using version 7.8.1 and have also tried with 7.16.2. Both return the same error. I'm running as an administrator. Same error in Windows cmd and git bash shells. Same error when I try to run the command on different logstash configuration files.
My command:
logstash -tf logstash-sample.conf
Error message and stack trace for v7.16.2:
Using JAVA_HOME defined java: C:\Program Files\Java\jdk1.8.0_331 WARNING: Using JAVA_HOME while Logstash distribution comes with a bundled JDK. DEPRECATION: The use of JAVA_HOME is now deprecated and will be removed starting from 8.0. Please configure LS_JAVA_HOME instead. [FATAL] 2022-05-02 18:08:05.960 [main] Logstash - Logstash stopped processing because of an error: (EACCES) Permission denied - NUL org.jruby.exceptions.SystemCallError: (EACCES) Permission denied - NUL at org.jruby.RubyIO.sysopen(org/jruby/RubyIO.java:1237) ~[jruby-complete-9.2.20.1.jar:?] at org.jruby.RubyFile.initialize(org/jruby/RubyFile.java:365) ~[jruby-complete-9.2.20.1.jar:?] at org.jruby.RubyIO.open(org/jruby/RubyIO.java:1156) ~[jruby-complete-9.2.20.1.jar:?] at uri_3a_classloader_3a_.META_minus_INF.jruby_dot_home.lib.ruby.stdlib.rubygems.user_interaction.initialize(uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/rubygems/user_interaction.rb:645) ~[?:?] at C_3a_.apps.logstash.logstash_minus_7_dot_16_dot_2.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_33.lib.bundler.ui.rg_proxy.initialize(C:/apps/logstash/logstash-7.16.2/vendor/bundle/jruby/2.5.0/gems/bundler-2.2.33/lib/bundler/ui/rg_proxy.rb:11) ~[?:?] at C_3a_.apps.logstash.logstash_minus_7_dot_16_dot_2.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_33.lib.bundler.ui=(C:/apps/logstash/logstash-7.16.2/vendor/bundle/jruby/2.5.0/gems/bundler-2.2.33/lib/bundler.rb:90) ~[?:?] at C_3a_.apps.logstash.logstash_minus_7_dot_16_dot_2.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_33.lib.bundler.ui(C:/apps/logstash/logstash-7.16.2/vendor/bundle/jruby/2.5.0/gems/bundler-2.2.33/lib/bundler.rb:86) ~[?:?] at C_3a_.apps.logstash.logstash_minus_7_dot_16_dot_2.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_33.lib.bundler.rubygems_integration.validate(C:/apps/logstash/logstash-7.16.2/vendor/bundle/jruby/2.5.0/gems/bundler-2.2.33/lib/bundler/rubygems_integration.rb:72) ~[?:?] at C_3a_.apps.logstash.logstash_minus_7_dot_16_dot_2.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_33.lib.bundler.source.path.validate_spec(C:/apps/logstash/logstash-7.16.2/vendor/bundle/jruby/2.5.0/gems/bundler-2.2.33/lib/bundler/source/path.rb:168) ~[?:?] at C_3a_.apps.logstash.logstash_minus_7_dot_16_dot_2.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_33.lib.bundler.source.path.load_spec_files(C:/apps/logstash/logstash-7.16.2/vendor/bundle/jruby/2.5.0/gems/bundler-2.2.33/lib/bundler/source/path.rb:182) ~[?:?] at org.jruby.RubyArray.each(org/jruby/RubyArray.java:1821) ~[jruby-complete-9.2.20.1.jar:?] at C_3a_.apps.logstash.logstash_minus_7_dot_16_dot_2.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_33.lib.bundler.source.path.load_spec_files(C:/apps/logstash/logstash-7.16.2/vendor/bundle/jruby/2.5.0/gems/bundler-2.2.33/lib/bundler/source/path.rb:176) ~[?:?] at C_3a_.apps.logstash.logstash_minus_7_dot_16_dot_2.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_33.lib.bundler.source.path.local_specs(C:/apps/logstash/logstash-7.16.2/vendor/bundle/jruby/2.5.0/gems/bundler-2.2.33/lib/bundler/source/path.rb:107) ~[?:?] at C_3a_.apps.logstash.logstash_minus_7_dot_16_dot_2.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_33.lib.bundler.source.path.specs(C:/apps/logstash/logstash-7.16.2/vendor/bundle/jruby/2.5.0/gems/bundler-2.2.33/lib/bundler/source/path.rb:115) ~[?:?] 
at C_3a_.apps.logstash.logstash_minus_7_dot_16_dot_2.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_33.lib.bundler.definition.specs_for_source_changed?(C:/apps/logstash/logstash-7.16.2/vendor/bundle/jruby/2.5.0/gems/bundler-2.2.33/lib/bundler/definition.rb:557) ~[?:?] at C_3a_.apps.logstash.logstash_minus_7_dot_16_dot_2.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_33.lib.bundler.definition.specs_changed?(C:/apps/logstash/logstash-7.16.2/vendor/bundle/jruby/2.5.0/gems/bundler-2.2.33/lib/bundler/definition.rb:542) ~[?:?] at C_3a_.apps.logstash.logstash_minus_7_dot_16_dot_2.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_33.lib.bundler.definition.converge_paths(C:/apps/logstash/logstash-7.16.2/vendor/bundle/jruby/2.5.0/gems/bundler-2.2.33/lib/bundler/definition.rb:586) ~[?:?] at org.jruby.RubyArray.any?(org/jruby/RubyArray.java:4553) ~[jruby-complete-9.2.20.1.jar:?] at C_3a_.apps.logstash.logstash_minus_7_dot_16_dot_2.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_33.lib.bundler.definition.converge_paths(C:/apps/logstash/logstash-7.16.2/vendor/bundle/jruby/2.5.0/gems/bundler-2.2.33/lib/bundler/definition.rb:585) ~[?:?] at C_3a_.apps.logstash.logstash_minus_7_dot_16_dot_2.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_33.lib.bundler.definition.initialize(C:/apps/logstash/logstash-7.16.2/vendor/bundle/jruby/2.5.0/gems/bundler-2.2.33/lib/bundler/definition.rb:128) ~[?:?] at C_3a_.apps.logstash.logstash_minus_7_dot_16_dot_2.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_33.lib.bundler.dsl.to_definition(C:/apps/logstash/logstash-7.16.2/vendor/bundle/jruby/2.5.0/gems/bundler-2.2.33/lib/bundler/dsl.rb:221) ~[?:?] at C_3a_.apps.logstash.logstash_minus_7_dot_16_dot_2.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_33.lib.bundler.dsl.evaluate(C:/apps/logstash/logstash-7.16.2/vendor/bundle/jruby/2.5.0/gems/bundler-2.2.33/lib/bundler/dsl.rb:13) ~[?:?] at C_3a_.apps.logstash.logstash_minus_7_dot_16_dot_2.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_33.lib.bundler.definition.build(C:/apps/logstash/logstash-7.16.2/vendor/bundle/jruby/2.5.0/gems/bundler-2.2.33/lib/bundler/definition.rb:33) ~[?:?] at C_3a_.apps.logstash.logstash_minus_7_dot_16_dot_2.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_33.lib.bundler.definition(C:/apps/logstash/logstash-7.16.2/vendor/bundle/jruby/2.5.0/gems/bundler-2.2.33/lib/bundler.rb:196) ~[?:?] at C_3a_.apps.logstash.logstash_minus_7_dot_16_dot_2.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_33.lib.bundler.setup(C:/apps/logstash/logstash-7.16.2/vendor/bundle/jruby/2.5.0/gems/bundler-2.2.33/lib/bundler.rb:144) ~[?:?] at C_3a_.apps.logstash.logstash_minus_7_dot_16_dot_2.lib.bootstrap.bundler.setup!(C:/apps/logstash/logstash-7.16.2/lib/bootstrap/bundler.rb:79) ~[?:?] at C_3a_.apps.logstash.logstash_minus_7_dot_16_dot_2.lib.bootstrap.environment.<main>(C:\apps\logstash\logstash-7.16.2\lib\bootstrap\environment.rb:89) ~[?:?]
I'm not sure what to look at to fix the permission issue. Any ideas on the underlying cause?
-
How to detect logstash input connection error
How can I monitor and detect errors when connecting Kafka to Logstash?
Say, for example, my Kafka broker is down and no connection is established between Kafka and Logstash.
Is there a way to monitor the connection status between Logstash and Kafka? I can query the Logstash logs (but I don't think that is the appropriate way), and I tried the Logstash monitoring API (for example localhost:9600/_node/stats/pipelines?pretty), but no API tells me that the connection is down.
Thank you in advance
-
How to translate and update Azure Cognitive Search Index document for different Language Analyzer fields?
I am working on the configuration of an Azure Cognitive Search index that will be queried from websites in different languages. I have created language-specific fields and added the appropriate language analyzers during index creation. For example:
{ "id": "", "Description": "some_value", "Description_es": null, "Description_fr": null, "Region": [ "some_value", "some_value" ], "SpecificationData": [ { "name": "some_key1", "value": "some_value1", "name_es": null, "value_es": null, "name_fr": null, "value_fr": null }, { "name": "some_key2", "value": "some_value2", "name_pt": null, "value_pt": null, "name_fr": null, "value_fr": null } ] }
The fields Description, SpecificationData.name and SpecificationData.value are in English and come from Cosmos DB. The fields Description_es, SpecificationData.name_es and SpecificationData.value_es will be queried from the Spanish website and should contain the Spanish translations, and similarly for the French-language fields. But since Cosmos DB has the fields only in English, the language-specific fields such as Description_es, SpecificationData.name_es and SpecificationData.value_es are null by default. I have tried using skillsets and linking the index to "Azure Cognitive Translate Service", but the skillset translates only one field at a time. Is there any way to translate multiple fields and save each translation in its particular field?
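One possible approach, sketched under the assumption that the skillset REST shape below matches your service/API version (the endpoint, keys, api-version, and field names are placeholders), is to define several translation skills in one skillset, one per field, each writing to its own output, and then map those outputs to the language-specific index fields via the indexer's outputFieldMappings:

# Sketch: a skillset with one translation skill per field. Endpoint, keys,
# api-version, and skill/field names are placeholders to adapt.
import requests

endpoint = "https://<your-search-service>.search.windows.net"
headers = {"api-key": "<admin-api-key>", "Content-Type": "application/json"}

skillset = {
    "name": "multi-field-translate",
    "skills": [
        {
            "@odata.type": "#Microsoft.Skills.Text.TranslationSkill",
            "context": "/document",
            "defaultToLanguageCode": "es",
            "inputs": [{"name": "text", "source": "/document/Description"}],
            "outputs": [{"name": "translatedText", "targetName": "Description_es"}],
        },
        {
            "@odata.type": "#Microsoft.Skills.Text.TranslationSkill",
            "context": "/document",
            "defaultToLanguageCode": "fr",
            "inputs": [{"name": "text", "source": "/document/Description"}],
            "outputs": [{"name": "translatedText", "targetName": "Description_fr"}],
        },
        # Repeat for the SpecificationData.* fields, adjusting the context to the
        # collection path (e.g. "/document/SpecificationData/*") as needed.
    ],
    "cognitiveServices": {
        "@odata.type": "#Microsoft.Azure.Search.CognitiveServicesByKey",
        "key": "<cognitive-services-key>",
    },
}

resp = requests.put(f"{endpoint}/skillsets/multi-field-translate?api-version=2020-06-30",
                    headers=headers, json=skillset)
resp.raise_for_status()
# The indexer then needs outputFieldMappings from /document/Description_es and
# /document/Description_fr to the corresponding index fields.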
-
Migrate data from MongoDB on Azure to Azure Search
I'm using the Microsoft Azure cloud provider in my project, where I have MongoDB installed on a VM on Azure, and I also have an Azure Cognitive Search instance. What I want to do is migrate the data I have in MongoDB to Azure Search in order to create indexes, and then use the RESTful APIs from the client application. My question is: is there a way to move data from MongoDB to Azure Search?
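Since the MongoDB instance is self-hosted on a VM, one straightforward option is a small migration script that reads from MongoDB and pushes documents into the search index. A minimal sketch, assuming Python with pymongo and azure-search-documents, with all connection strings, names, and fields as placeholders:

# Migration sketch: copy documents from a self-hosted MongoDB into an Azure
# Cognitive Search index. Connection strings, names, and fields are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from pymongo import MongoClient

mongo = MongoClient("mongodb://<vm-host>:27017")
collection = mongo["<database>"]["<collection>"]

search = SearchClient(endpoint="https://<your-search-service>.search.windows.net",
                      index_name="<target-index>",
                      credential=AzureKeyCredential("<admin-api-key>"))

batch, batch_size = [], 500
for doc in collection.find():
    batch.append({
        "id": str(doc["_id"]),          # the index key must be a string
        "title": doc.get("title", ""),  # map the remaining fields to the index schema
    })
    if len(batch) >= batch_size:
        search.upload_documents(documents=batch)
        batch = []
if batch:
    search.upload_documents(documents=batch)

Run it once for the initial load; re-running it with merge_or_upload_documents instead of upload_documents would keep the index in sync with later changes.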