I have a problem when configuring a distributed Always On availability group.
I have 4 virtual machines running SQL Server 2017.
Two machines have the IPs x.x.190.5 and x.x.190.6; these machines host the first Always On availability group (AVG01).
The other two machines have the IPs x.x.189.1 and x.x.189.2; these machines host the second Always On availability group (AVG02).
Now I want to configure a distributed Always On availability group between these two availability groups.
Every configuration step passes successfully, but at the end the second availability group shows a red error icon under the distributed availability group, and the DAVG does not work.
The Always On dashboard also throws an "object reference" error when I try to check it.
1 answer
-
answered 2020-08-02 05:10
Mahdi Rahimi
I found my problem.
We configured the listeners on port 1433 and the endpoints on port 5022.
When we configured the distributed Always On availability group, we entered the listener address with port 1433, but that is wrong: we must use the listener address combined with the endpoint port.
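For anyone hitting the same thing, here is a minimal sketch of where the endpoint port goes in the distributed AG definition (the listener DNS names and the mode/seeding options below are hypothetical placeholders, not our actual values):

CREATE AVAILABILITY GROUP [DAG01]
WITH (DISTRIBUTED)
AVAILABILITY GROUP ON
    'AVG01' WITH
    (
        -- the listener address, but with the ENDPOINT port (5022),
        -- not the listener port (1433)
        LISTENER_URL = 'tcp://avg01-listener.contoso.local:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        SEEDING_MODE = AUTOMATIC
    ),
    'AVG02' WITH
    (
        LISTENER_URL = 'tcp://avg02-listener.contoso.local:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        SEEDING_MODE = AUTOMATIC
    );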
See also questions close to this topic
-
Returning values from a stored procedure ASP Classic
I am having a problem outputting the results from a stored procedure using SQL Server and ASP Classic. If I have a simple SELECT in the procedure, it works fine. But with the code shown here, I get an error. I have this stored procedure in SQL Server:
ALTER PROCEDURE [dbo].[Sp_Teste]
    @data varchar(8)
AS
BEGIN
    --DROP TABLE IF EXISTS #TempSubs
    DECLARE @TempSubs TABLE (
        PedidoID Int,
        NumeroPedido Varchar(20),
        SubstituidoPor Varchar(8000)
    )

    INSERT INTO @TempSubs (PedidoID, NumeroPedido, SubstituidoPor)
    SELECT P.ID, P.NumeroPedido,
           STRING_AGG(CAST(IPA.Quantidade AS varchar(5)) + 'X ' + Pd.Nome, ', ')
           + ' por '
           + STRING_AGG(CAST(IPA.Quantidade AS varchar(5)) + 'X ' + Pd2.Nome, ', ') AS SubstituidoPor
    FROM Pedidos P, Clientes C, Produtos Pd, ItensPedidosAjustado IPA, Produtos Pd2
    WHERE P.ID = IPA.PedidoId
      AND P.ClienteId = C.ID
      AND Pd.ID = IPA.ProdutoId
      AND Faltante = 1
      AND CONVERT(Date, P.DataPedido, 103) = CONVERT(Date, @data, 103)
      AND (IPA.ProdutoSubstituidoId <> 0)
      AND Pd2.ID = IPA.ProdutoSubstituidoId
      AND ((P.StatusPedido <> 'Pause' AND P.StatusPedido <> 'PULOU ENTREGA' AND P.StatusPedido <> 'Pedido Cancelado') OR P.StatusPedido IS NULL)
    GROUP BY P.ID, P.NumeroPedido, IPA.ProdutoSubstituidoId

    SELECT (SELECT STRING_AGG(Indisponibilidade, ', ') FROM @TempIndis A WHERE A.PedidoID = P.ID) AS Indisponibilidade,
           (SELECT STRING_AGG(SubstituidoPor, ', ') FROM @TempSubs A WHERE A.PedidoID = P.ID) AS Substituicao
    FROM Pedidos P, Clientes C, ItensPedidosAjustado IPA
    WHERE P.ID = IPA.PedidoId
      AND P.ClienteId = C.ID
      AND Faltante = 1
      AND CONVERT(Date, P.DataPedido, 103) = CONVERT(Date, @data, 103)
      AND ((P.StatusPedido <> 'Pause' AND P.StatusPedido <> 'PULOU ENTREGA' AND P.StatusPedido <> 'Pedido Cancelado') OR P.StatusPedido IS NULL)
      AND P.PedidoCancelado = 0
    GROUP BY P.ID, P.NumeroPedido, C.Nome, C.Email, P.TipoAssinatura
    ORDER BY NumeroPedido
END
and this code in ASP Classic
db_conn = "Provider=SQLNCLI11;Server=xxxx;Database=BaseGaia;Uid=sqlserver;Pwd=xxxxx;" set conn = server.createobject("adodb.connection") set Cmd = Server.CreateObject("ADODB.Command") '------------------------------------------------------- conn.open (db_conn) '------------------------------------------------------- set rs = Server.CreateObject("ADODB.RecordSet") sSQL = "EXEC Sp_Teste @data = '20210301'" set rs = conn.execute (sSQL) response.write rs.eof
I get this error:
ADODB.Recordset error '800a0e78'
Operation is not allowed when the object is closed.
/Atendimento/testestoreprocedure.asp, line 18
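One likely cause, offered as a sketch rather than a confirmed diagnosis: the INSERT inside the procedure emits a "rows affected" message, which ADO surfaces as an extra, closed recordset ahead of the SELECT's results. Adding SET NOCOUNT ON at the top of the procedure suppresses those messages (alternatively, rs.NextRecordset can be used to skip past them):

ALTER PROCEDURE [dbo].[Sp_Teste]
    @data varchar(8)
AS
BEGIN
    -- Suppress "rows affected" messages so ADO does not
    -- return them as empty, closed recordsets.
    SET NOCOUNT ON
    -- ... rest of the procedure unchanged ...
END
-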
How do I restrict my SQL aggregation Rollups to a specific grouping?
The following SQL returns 5 grouped columns and two aggregated columns:
select ten.TenancyName, svd.EmployeeId, usr.DisplayName, tjt.JobTitle, tjc.Name as JobCategory,
       count(svd.ratingpoints) as NumReviews, avg(svd.ratingpoints) as Rating
from surveydetails svd
join AbpUsers usr on usr.Id = svd.EmployeeId
join AbpTenants ten on ten.Id = usr.TenantId
join TenantJobTitle tjt on tjt.TenantId = usr.TenantId and tjt.Id = usr.JobTitleId
join TenantJobTitleCategories tjc on tjc.Id = tjt.JobTitleCategory
where svd.employeeid is not null
  and svd.CreationTime > '2020-01-01' and svd.CreationTime < '2021-12-31'
group by ten.TenancyName, rollup(svd.EmployeeId, usr.DisplayName, tjt.JobTitle, tjc.Name)
order by ten.TenancyName, svd.EmployeeId, usr.DisplayName, tjt.JobTitle, tjc.[Name]
I do want the rollup at the TenancyName level, but I don't need all the other intermediate rollup lines. In fact, you can see that rolling up from the Doctor's (Employee's) row to the EmployeeId produces the exact same value on every row, because these are one-to-one data attributes. The only level where rolling up makes sense is TenancyName, because there are multiple Doctors within each Tenant.
After the fact, I can eliminate the unwanted rows either using a HAVING clause or by making this a sub-select to an outer select which will filter out the undesired rows. For example:
select ten.TenancyName, svd.EmployeeId, usr.DisplayName, tjt.JobTitle, tjc.Name as JobCategory,
       count(svd.ratingpoints) as NumReviews, avg(svd.ratingpoints) as Rating
from surveydetails svd
join AbpUsers usr on usr.Id = svd.EmployeeId
join AbpTenants ten on ten.Id = usr.TenantId
join TenantJobTitle tjt on tjt.TenantId = usr.TenantId and tjt.Id = usr.JobTitleId
join TenantJobTitleCategories tjc on tjc.Id = tjt.JobTitleCategory
where svd.employeeid is not null
  and svd.CreationTime > '2020-01-01' and svd.CreationTime < '2021-12-31'
group by ten.TenancyName, rollup(svd.EmployeeId, usr.DisplayName, tjt.JobTitle, tjc.Name)
having (svd.EmployeeId is null and usr.DisplayName is null and tjt.JobTitle is null and tjc.Name is null)
    or (ten.TenancyName is not null and svd.EmployeeId is not null and usr.DisplayName is not null and tjt.JobTitle is not null and tjc.Name is not null)
order by ten.TenancyName, svd.EmployeeId, usr.DisplayName, tjt.JobTitle, tjc.[Name]
This delivers what I want, but if this can be done naturally via the GROUP BY / ROLLUP construct, I should think that would be preferable from both simplicity and performance standpoints.
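If it helps, GROUPING SETS can express exactly those two levels. As a sketch, this replaces only the GROUP BY clause of the first query, keeping the full-detail rows plus the per-TenancyName rollup and nothing in between:

group by grouping sets (
    (ten.TenancyName, svd.EmployeeId, usr.DisplayName, tjt.JobTitle, tjc.Name),
    (ten.TenancyName)
)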
-
Escaping JSON special characters with JSON_QUERY not working
A project I'm working on involves storing a string of data in a table column. The table will have other columns relevant to the records. We decided to store the string data column using JSON.
From the table, a view will parse the JSON column into separate columns. The view will also have columns derived from the other main table columns. The data from the view is then used to populate parts of a document through SSRS.
When loading data into the main table, I need to utilize separate tables for deriving the other column values and the JSON column. I decided to use common table expressions for this. At the end of the query, I bring together the derived columns from the different common table expressions, including the JSON column, and insert them into the main table.
I had it almost done until I realized that when I use FOR JSON to create the JSON column, it escapes special characters. I did some research and have been trying to use the JSON_QUERY function to get around this but it's not working. Here is a simplification of the problem:
WITH Table1 ( First_Name_JSON ) AS
(
    SELECT 'Tim/' AS First_Name FOR JSON PATH
)
SELECT JSON_QUERY(Table1.First_Name_JSON) AS first_name
FROM Table1
FOR JSON PATH
Here is the output:
[{"first_name":[{"First_Name":"Tim\/"}]}]
Why is it still escaping? The documentation shows that passing a column created by FOR JSON to the JSON_QUERY function should make it return the value without escaped characters.
I know that this works:
SELECT JSON_QUERY('{"Firt_Name": "Tim/"}') as first_name FOR JSON PATH
Output:
[{"first_name":{"Firt_Name": "Tim/"}}]
However, I need to be able to pass a column that's holding JSON data already because it's pretty long logic with many columns. Using FOR JSON is ideal for making changes versus hard coding the JSON format around each column.
I must be missing something. Thanks for any help.
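One observation that may narrow this down: the escaping appears to come from the inner FOR JSON, before JSON_QUERY is ever involved; JSON_QUERY merely passes the already-escaped document through unchanged ("\/" is a valid JSON escape that any parser reads back as "/"). A quick check, as a sketch:

-- The backslash is already present in the inner FOR JSON output,
-- so there is nothing left for JSON_QUERY to "unescape".
SELECT (SELECT 'Tim/' AS First_Name FOR JSON PATH) AS raw_json
-- returns: [{"First_Name":"Tim\/"}]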
-
How to have highly available Moodle in Kubernetes?
I want to set up highly available Moodle in K8s (on-prem). I'm using Bitnami Moodle with Helm charts.
After a successful Moodle installation, it works. But when a K8s node goes down, the Moodle web page reverts/redirects to the Moodle installation page, like a loop.
Persistent storage is rook-ceph. The Moodle PVC is ReadWriteMany, while the MySQL PVC is ReadWriteOnce.
The following command was used to deploy Moodle.
helm install moodle --set global.storageClass=rook-cephfs,replicaCount=3,persistence.accessMode=ReadWriteMany,allowEmptyPassword=false,moodlePassword=Moodle123,mariadb.architecture=replication bitnami/moodle
Any help on this is appreciated.
Thanks.
-
High-Availability not working in Hadoop cluster
I am trying to move my non-HA namenode to HA. After setting up all the configurations for the JournalNode by following the Apache Hadoop documentation, I was able to bring the namenodes up. However, the namenodes crash immediately and throw the following error.
ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. java.io.IOException: There appears to be a gap in the edit log. We expected txid 43891997, but got txid 45321534.
I tried to recover the edit logs, initialize the shared edits, etc., but nothing works. I am not sure how to fix this problem without formatting the namenode, since I do not want to lose any data.
Any help is greatly appreciated. Thanks in advance.
-
Apache Kafka Consume from Slave/ISR node
I understand the concept of master/slave and data replication in Kafka, but I don't understand why consumers and producers are always routed to the master node of a partition when writing/reading instead of being able to read from any ISR (in-sync replica)/slave.
The way I think about it, if all consumers are redirected to one single master node, then more hardware is required to handle the read/write operations from large consumer groups/producers.
Is it possible to read and write on slave nodes, or will consumers/producers always reach out to the master node of that partition?
-
Need information on Azure Storage - synchronous data replication
I am creating a disaster recovery plan for an Azure-based application. This application uses Azure Storage (Blob, general-purpose v2). We use the REST API to insert data into the blob container, and GRS for redundancy. As per the Azure documentation, the data is first copied synchronously to 3 different availability zones in the same region. So my question is: when I upload a blob to Azure Storage using the Azure SDK or a REST API call and receive a success (200 OK) response, has the synchronous copy to all 3 availability zones in the region completed, or has only the copy to the first zone completed while the remaining two are queued?
-
Is there a disaster recovery strategy plan if someone deletes whole azure devops organization?
What are the disaster recovery options provided by Microsoft if we delete resources from Azure DevOps, like build/release pipelines, repos, or even the whole organization?
Please also specify some best practices.
-
Azure storage account fail over
Following this link for the Azure Storage failover process: all this link describes is the manual way of initiating the failover process.
Is there a way to do this failover programmatically, without any manual intervention?
What signal or exception should trigger the failover process?
Will the Azure Storage SDK raise any particular exception in case of storage account unavailability?
How can I replicate/simulate storage account unavailability for development and testing?
-
Firewall IP Address to open SQL Server always-on listener
On a 2-node cluster with an AG and listener configured, which IP address should I open in the firewall to let the application access the listener?
- Node1 SQL A --> IP address A
- Node2 SQL B --> IP address B
- Listener C --> IP address C
I believe that if the app connects through the listener, I should only open IP address C for the listener.
Or should I open all 3 IP addresses in the firewall?
-
Clarifying the need for MultiSubnetFailover in a SQL Server connection string
I have an ASP.NET Core 3.1 Web API that uses both Entity Framework (for read/write object graphs) and Dapper (for read-only dynamic result sets).
I faced a problem where opening a connection to an Always On availability group with Dapper took up to 20s to succeed, while it was instantaneous with Entity Framework. I may have network issues, by the way, but that is another problem.
It took me some time to discover that I was in fact not using the same database provider:
- Microsoft.EntityFrameworkCore.SqlServer for Entity Framework.
- System.Data.SqlClient for Dapper.
It took me more time to discover that I needed to use MultiSubnetFailover=true in my connection string to solve my problem with Dapper.
I'm wondering why it simply works with EF (does it default to true?), while it is required in the connection string for SqlClient.
Any help appreciated.
-
Always On automatic failover not working: what if I stop the SQL service on the primary?
To test the failover functionality on SQL Server 2016, I tested the scenarios below.
- Reboot Primary node - Successful
- Reboot Secondary node which becomes primary after 1st test. - Successful
- Stop SQL Service on the primary node - The secondary node got stuck in Resolving mode
Does anyone know why it's stuck in resolving mode?
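Not a full answer, but a sketch of a query that can help verify the preconditions for automatic failover (the replica pair must be SYNCHRONOUS_COMMIT with AUTOMATIC failover mode and SYNCHRONIZED; a lost cluster quorum is the other usual suspect for a replica stuck in Resolving):

SELECT ar.replica_server_name,
       ar.availability_mode_desc,   -- needs to be SYNCHRONOUS_COMMIT
       ar.failover_mode_desc,       -- needs to be AUTOMATIC
       ars.role_desc,
       ars.synchronization_health_desc
FROM sys.availability_replicas ar
JOIN sys.dm_hadr_availability_replica_states ars
    ON ar.replica_id = ars.replica_id;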
-
How to securely access an on-prem database from an Azure App Service
Is there a way to securely access an on-prem SQL Server from an App Service?
The IT guys are nervous about letting an App Service have access to our on-premises database.
I am not a networking guy, and am trying to come up with a solution.
The only thing I have thought of is creating a new database (CDS_API). The App Service is then given a connection string to this database. This database would then have access to the primary database (CDS).
If the App Service has only execute permissions on CDS_API, this seems secure to me. Am I missing something?
Is there a better way to do this?
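To illustrate the "execute permissions only" part of that idea (all names here are hypothetical), a sketch of a CDS_API principal that can run stored procedures but cannot read or write tables directly:

-- Hypothetical login/user for the App Service connection string.
CREATE LOGIN app_service_login WITH PASSWORD = 'use-a-strong-password-here';
CREATE USER app_service_user FOR LOGIN app_service_login;

-- Allow executing procedures in dbo, but deny direct table access.
GRANT EXECUTE ON SCHEMA::dbo TO app_service_user;
DENY SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO app_service_user;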
-
Which Microsoft SQL server mode should I choose?
I am installing a fresh copy of MS SQL Server 2019 on my machine. During the Analysis Services configuration step, I was given 3 Server Mode options to choose from. As a Microsoft full-stack developer (ASP.NET, C#.NET, SQL Server), I wonder which one would be the best choice out of them and what their significance is.
- Multidimensional and Data Mining Mode
- Tabular Mode
- PowerPivot Mode
Here's the screenshot