Azure API call taking a long time
I am currently working on a project in which I need to get the list of SQL databases.
I used this API call:
https://docs.microsoft.com/en-us/rest/api/sql/databases/listbyserver
It takes three parameters, namely subscription ID, resource group, and server name.
I used another API call to get the resource groups:
https://docs.microsoft.com/en-us/rest/api/resources/resourcegroups/list
For the server names and subscription IDs, I have created a dictionary for manual insertion.
My code structure looks something like this (a sketch follows below):
For loop: loop through the server names
    Call the resource groups API
    For loop: loop through the resource groups
        Call the listbyserver API
        For loop: read the values from the listbyserver response
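For reference, here is a minimal sketch of that structure in Python with the requests library. The bearer token, subscription IDs, and server names are placeholders, and the api-version values are assumptions that may need adjusting:

import requests

HEADERS = {"Authorization": "Bearer <token>"}  # placeholder: acquire via Azure AD
BASE = "https://management.azure.com"

subscriptions = ["sub-id-1"]   # manually maintained, as described above
server_names = ["server-1"]

for server in server_names:
    for sub_id in subscriptions:
        # List the resource groups in this subscription
        rg_url = f"{BASE}/subscriptions/{sub_id}/resourcegroups?api-version=2021-04-01"
        groups = requests.get(rg_url, headers=HEADERS).json().get("value", [])
        for rg in groups:
            # listbyserver: databases under this server and resource group
            db_url = (f"{BASE}/subscriptions/{sub_id}/resourceGroups/{rg['name']}"
                      f"/providers/Microsoft.Sql/servers/{server}/databases"
                      f"?api-version=2021-02-01-preview")
            resp = requests.get(db_url, headers=HEADERS)
            if resp.ok:
                for db in resp.json().get("value", []):
                    print(db["name"])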
The problem I am facing is that it takes almost an hour to get the result.
I have worked on another project with a different API call (mentioned below); its code structure and parameters are similar, yet the call completes within 5 minutes.
API call: https://docs.microsoft.com/en-us/rest/api/compute/virtualmachines/list
I want to understand why it takes so long in the case of the listbyserver API call.
Is there any other way I can get the expected results? (One hedged option is sketched below.)
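For illustration only: the nested loops above issue one sequential HTTP round-trip per (server, resource group) pair, so the number of calls multiplies quickly. One possible mitigation, assuming per-call latency rather than payload size is the bottleneck, is to run the listbyserver calls concurrently. A minimal sketch with Python's concurrent.futures (all names and the api-version are illustrative):

from concurrent.futures import ThreadPoolExecutor
import requests

HEADERS = {"Authorization": "Bearer <token>"}  # placeholder token

def list_databases(sub_id, rg_name, server):
    # One listbyserver call for a (subscription, resource group, server) triple.
    url = (f"https://management.azure.com/subscriptions/{sub_id}"
           f"/resourceGroups/{rg_name}/providers/Microsoft.Sql"
           f"/servers/{server}/databases?api-version=2021-02-01-preview")
    resp = requests.get(url, headers=HEADERS)
    return resp.json().get("value", []) if resp.ok else []

# (subscription, resource group, server) triples collected from the loops above
triples = [("sub-id-1", "rg-1", "server-1"), ("sub-id-1", "rg-2", "server-1")]

with ThreadPoolExecutor(max_workers=10) as pool:
    for dbs in pool.map(lambda t: list_databases(*t), triples):
        for db in dbs:
            print(db["name"])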
Thanks in advance!!!
See also questions close to this topic
-
How do I denormalize an ER-D into reporting views for end users?
Link to ER-D: D2L ER-D Diagram for Competency
We have this data in an Oracle database. It goes through IBM Framework Manager, which reflects all of the relationships in the ER-D and adds some security; it is then available to our end users via Cognos, our reporting tool. I've been tasked with denormalizing the data so that the end users see fewer reporting views/tables. For example, for this specific data set, the user currently sees all 6 competency-related tables, along with 2 others (Users and Organizational Units). The goal is to make things easier for the end user by doing the joining up front, so that instead of 6 (or 8) tables there are maybe 2 or 3 reporting views. I've never done this before, and I assume that in creating the views, because none of the relationships have zero cardinality (as in zero-to-many, one-to-zero-or-many, etc.), they are all inner joins. So, first question: are these all inner joins? Second: do I list the columns I want from each table and then just join on the keys, like this?

select a.ActivityId, a.OrgUnitId, a.ActivityName, /* etc. */
       b.UserId, b.LearningObjectId /* etc. */
from CompetencyActivities a
inner join CompetencyActivityResults b
        on a.ActivityId = b.ActivityId
       and a.OrgUnitId = b.OrgUnitId
Third question: how do I figure out how many views to create? Would creating a single reporting view be an awful idea?
Also, I've done my best googling and have found plenty of advice on how to create ER-Ds and how to normalize, but I'm having a hard time finding anything that explains how to denormalize data for reporting, so any resources at all would be most appreciated. Thanks so much!
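For what it's worth, a hedged sketch of one denormalized reporting view, created here from Python with the python-oracledb driver; the table and column names follow the example above and are assumptions, not D2L's actual schema:

import oracledb

DDL = """
CREATE OR REPLACE VIEW competency_activity_report AS
SELECT a.ActivityId,
       a.OrgUnitId,
       a.ActivityName,
       b.UserId,
       b.LearningObjectId
FROM CompetencyActivities a
INNER JOIN CompetencyActivityResults b
        ON a.ActivityId = b.ActivityId
       AND a.OrgUnitId = b.OrgUnitId
"""

# placeholder connection details
with oracledb.connect(user="report_owner", password="<secret>",
                      dsn="dbhost/orclpdb1") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)  # end users then query only the view in Cognos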
-
How to fetch SQL data using an API and use that data in react-native-svg-charts? I have an API that I want to use to fetch data and display it.
I am fetching some data using an API; inside that API, SQL queries are executed. I want to know how I can replace my charts' static data with dynamic data fetched from the API.
Here is my TabDashboardDetail.js, where I set the title for all charts based on API data:

import React from 'react';
import DefaultScrollView from '../components/default/DefaultScrollView';
import ChartView from '../components/default/ChartView';
import CogniAreaChart from '../components/CogniAreaChart';
import { areaChartData } from '../chartData'; // static sample data for now

const TabDashboardDetail = ({ navigation, route }) => {
  const tabsConfig = route.params.tabsConfig;
  return (
    <DefaultScrollView>
      {tabsConfig.components.map((comp, index) => {
        return (
          <ChartView key={index} title={comp.name}>
            <CogniAreaChart areaChartData={areaChartData} height={200} />
          </ChartView>
        );
      })}
    </DefaultScrollView>
  );
};

export default TabDashboardDetail;
Here is my CogniAreaChart.js, the chart component that is currently being rendered:

/* eslint-disable react-native/no-inline-styles */
import React from 'react';
import { View } from 'react-native';
import { AreaChart, YAxis, XAxis } from 'react-native-svg-charts';
import * as shape from 'd3-shape';

const CogniAreaChart = ({ areaChartData, visibility, ...props }) => {
  const xAxis = areaChartData.message.map((item) => item[Object.keys(item)[0]]);
  const areaChartY1 = areaChartData.message.map(
    (item) => item[Object.keys(item)[1]],
  );
  return (
    <View
      style={{
        height: props.height,
        flexDirection: 'row',
      }}>
      <YAxis
        data={areaChartY1}
        contentInset={{ marginBottom: 20 }}
        svg={{
          fill: 'grey',
          fontSize: 12,
        }}
      />
      <View style={{ flex: 1 }}>
        <AreaChart
          style={{ flex: 1 }}
          data={areaChartY1}
          contentInset={{ top: 20, bottom: 20 }}
          curve={shape.curveNatural}
          svg={{ fill: 'rgba(134, 65, 244, 0.8)' }}
        />
        <XAxis
          style={{ height: 20 }}
          data={areaChartY1}
          formatLabel={(value, index) => xAxis[index]}
          contentInset={{ left: 30, right: 30 }}
          svg={{
            fill: 'grey',
            fontSize: 12,
            rotation: 35,
            originY: 5,
            y: 15,
          }}
        />
      </View>
    </View>
  );
};

export default CogniAreaChart;
Here is the areaChartData that is currently being used in CogniAreaChart.js:

export const areaChartData = {
  message: [
    {
      year: '2018',
      quantity: 241.01956823922,
      sales: 74834.12976954,
    },
    {
      year: '2019',
      quantity: 288.57247706422,
      sales: 80022.3050176429,
    },
  ],
  status: 'success',
};
I have the API ready and will swap it in for this example data if anyone can suggest how.
-
How do I store an array in PostgreSQL when it is passed as a parameter (e.g. $1) to the db query?
I am passing a one-dimensional array of three strings to the function; it looks like this going in:
[ '#masprofundo', '#massensual', '#eclectic' ]
The data column is declared thus:
tags TEXT []
This is my function:
const query = `INSERT INTO posts (posted_at, message, link, tags)
               VALUES (TO_TIMESTAMP($1, 'DD/MM/YYYY HH24:MI'), $2, $3, ARRAY [$4])
               RETURNING tags;`;
const params = [timestamp, message, link, tags];
Now, Postgres believes I want to insert an array containing one item: a single string of all the values in my tags array. It looks like this:
{ tags: [ '{"#masprofundo","#massensual","#eclectic"}' ] }
What I want to know is: how do I prevent this behaviour, where Postgres adds an unnecessary extra layer of quotation marks and braces? For further clarification, this is what the row looks like in my terminal:
{"{\"#masprofundo\",\"#massensual\",\"#eclectic\"}"}
I have looked at the docs and tried a dozen variations on ARRAY[$4]. From what I can see, the docs do not elaborate on inserting arrays as variables. Is there some destructuring that needs to happen? The arrays I pass will be of varying size, and sometimes empty.
Any help is much appreciated.
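For comparison, a hedged sketch of the same insert from Python with psycopg2, which adapts a Python list to a real Postgres array when the list is bound directly to the placeholder (no ARRAY[...] wrapper in the SQL); connection details and values are placeholders:

import psycopg2

conn = psycopg2.connect("dbname=app user=app password=<secret>")  # placeholder DSN
tags = ['#masprofundo', '#massensual', '#eclectic']

with conn, conn.cursor() as cur:
    cur.execute(
        """
        INSERT INTO posts (posted_at, message, link, tags)
        VALUES (TO_TIMESTAMP(%s, 'DD/MM/YYYY HH24:MI'), %s, %s, %s)
        RETURNING tags;
        """,
        ('25/04/2021 10:30', 'a message', 'https://example.com', tags),
    )
    print(cur.fetchone()[0])  # ['#masprofundo', '#massensual', '#eclectic']

The same idea should carry over to node-postgres: binding the tags array straight to a plain $4 (rather than ARRAY [$4]) lets the driver serialize the array itself, though I have not verified that here.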
-
'Error [ERR_STREAM_CANNOT_PIPE]: Cannot pipe, not readable' when running TestCafe test cases in Sauce Labs
What I'm trying to do: I'm running TestCafe scripts through Azure Pipelines in Sauce Labs, against a localhost URL, as we have yet to figure out the authentication strategies. The test cases pass when I watch the video in the Sauce Labs build job, but at the end of the run the test case fails with this error:
Unhandled promise rejection:

Error [ERR_STREAM_CANNOT_PIPE]: Cannot pipe, not readable
    at ServerResponse.pipe (_http_outgoing.js:821:22)
    at Object.respondOnWebSocket (/Users/runner/work/1/s/apps/atom-testcafe/node_modules/testcafe/node_modules/testcafe-hammerhead/lib/request-pipeline/websocket.js:32:13)
    at Array.decideOnProcessingStrategy (/Users/runner/work/1/s/apps/atom-testcafe/node_modules/testcafe/node_modules/testcafe-hammerhead/lib/request-pipeline/stages.js:75:25)
    at Object.run (/Users/runner/work/1/s/apps/atom-testcafe/node_modules/testcafe/node_modules/testcafe-hammerhead/lib/request-pipeline/index.js:19:34)
    at process._tickCallback (internal/process/next_tick.js:68:7)
Browser: Chrome 90.0.4430.85 / Windows 10
-
Programmatically create a service SAS token for Storage Account in Azure
Currently I create service SAS tokens from the Azure portal; I would like to create them programmatically and periodically instead. Once a token has been created it should expire in one week, at which point a new token, also valid for one week, will be created, and so on. I was reading this article, https://docs.microsoft.com/it-it/azure/storage/blobs/sas-service-create?tabs=dotnet, but I am not sure where that code should run. In an Azure VM? I can't give internet access to the VM.
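For what it's worth, a minimal sketch with the azure-storage-blob v12 Python SDK (the linked article shows the .NET equivalent); account name, container, and key are placeholders. Generating a service SAS is a local HMAC computation over the account key, so this code does not itself need outbound internet access, and it could run anywhere a schedule is available, e.g. a timer-triggered Azure Function or a cron job:

from datetime import datetime, timedelta, timezone
from azure.storage.blob import ContainerSasPermissions, generate_container_sas

sas_token = generate_container_sas(
    account_name="mystorageacct",    # placeholder
    container_name="mycontainer",    # placeholder
    account_key="<account-key>",     # placeholder
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(weeks=1),  # one-week lifetime
)
print(sas_token)  # append to the blob/container URL as a query string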
-
Azure: How to build *resilient* resource deployment pipelines?
I am looking for some best practices and suggestions:
- My team is creating an Azure DevOps pipeline to deploy a complex infrastructure of VNets, VMs, Azure ML workspaces, SQL databases, etc.
- The pipeline uses Terraform where possible, but Powershell or AZ CLI where needed.
- The pipeline works, it is version controlled, it has proper unit tests and integration tests (or at least decent ones).
However, due to the instability of Azure resource provisioning, the pipeline sometimes fails because, for instance:
- SQL server provisioning fails
- AD join of VMs fails
- other activities fail which are not due to bad infrastructure-as-code, but rather to the stochasticity of the task; provisioning resources is inherently flaky, much like networking.
I am not complaining about Azure. I am just asking:
How can I adjust the IaC pipeline so that when Azure failures occur, some sort of retry is automatically triggered?
As a concrete example, is there an Azure or Terraform equivalent to Python's tenacity package or Java's Spring Retry?
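For illustration, a hedged sketch of the tenacity pattern applied to one flaky provisioning step, wrapping an az CLI call from Python with exponential backoff; the resource names are illustrative:

import subprocess
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(stop=stop_after_attempt(5),
       wait=wait_exponential(multiplier=2, min=10, max=300))
def provision_sql_server():
    # check=True raises CalledProcessError on a nonzero exit code,
    # which tenacity catches and turns into another attempt.
    subprocess.run(
        ["az", "sql", "server", "create",
         "--name", "my-sql-server",
         "--resource-group", "my-rg",
         "--location", "westeurope",
         "--admin-user", "sqladmin",
         "--admin-password", "<secret>"],
        check=True,
    )

provision_sql_server()

As far as I know, Terraform has no general-purpose retry block, but since terraform apply is idempotent, re-running the apply step on failure achieves a similar effect.
-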
API to update Scopus data
We have found that the indexed data in the Scopus database contains a lot of errors (missing articles, missing citations, wrong citations, unlinked citations, ...), and we have found only manual ways of sending requests for fixes (a web form or an Excel spreadsheet). I am wondering whether there is also an API to change/fix that data, or an API to send data to Scopus (so that it gets properly indexed)? As far as I can tell, there is no such API.
-
How to display an attribute of a JSON object in a res.send()
I am working on an API development project, and I am having difficulties. After a creation (POST), I want to display only one of the attributes (the title of the newly created book), like this:
title: 'Lorien Legacies'
The method that I have for POST is the following:

//CREATE Request Handler
app.post('/api/books', (req, res) => {
  const { error } = validateBook(req.body);
  if (error) {
    res.status(400).send(error.details[0].message);
    return;
  }
  const book = {
    id: books.length + 1,
    title: req.body.title,
  };
  books.push(book);
  res.send(book); // currently sends the whole book object, not just the title
});
-
A question about making a POST request with the Django REST framework while uploading multiple images
I have a question about making a POST request with the Django REST framework while uploading multiple images. My question: to get the images from a POST request I am using files_list = request.FILES.getlist('homeworkimage_set'), and the images are created in the database correctly. But in the response I am not getting homeworkimage_set: [{'image': url}, {'image': url}]; homeworkimage_set (image below) is empty in the response even though the images are correctly created in the database. I don't know if I am missing something. Is it necessary to get the image URLs in the response even though the images are saved in the database? If you have any idea how I can get the image URLs in the response, please help me with that. Thanks in advance.
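For what it's worth, a hedged sketch of one common way to make the created images show up in the POST response: declare the reverse relation as a nested read-only serializer. The Homework/HomeworkImage model names and fields here are assumptions inferred from the question, not the asker's actual code:

from rest_framework import serializers
from .models import Homework, HomeworkImage  # assumed models

class HomeworkImageSerializer(serializers.ModelSerializer):
    class Meta:
        model = HomeworkImage
        fields = ['image']

class HomeworkSerializer(serializers.ModelSerializer):
    # read_only: filled from the reverse relation when the response is serialized
    homeworkimage_set = HomeworkImageSerializer(many=True, read_only=True)

    class Meta:
        model = Homework
        fields = ['id', 'homeworkimage_set']

    def create(self, validated_data):
        homework = Homework.objects.create(**validated_data)
        files_list = self.context['request'].FILES.getlist('homeworkimage_set')
        for f in files_list:
            HomeworkImage.objects.create(homework=homework, image=f)
        # the instance returned here is re-serialized, with homeworkimage_set populated
        return homework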
-
Create a connection between Python and BigQuery in a VM instance on Google Cloud (problem with the credentials file)
I'm creating a script that should run in a VM instance (Google Cloud Platform), which I access over SSH (Linux). The code connects Python with BigQuery.
I'm having problems with credentials. When I run on my local machine, the credentials are in a file in a local directory, and the code accesses this file and runs the service. But when I run over SSH (VM instance), the VM doesn't find the file, because it's not in a VM directory.
What should I do to solve this problem? I think I need to put this file in the VM instance and use the new path, but I don't know how.
from google.cloud import bigquery
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file(
    r'C:\Users\Path\File.json')
project_id = 'Project_ID523'
client = bigquery.Client(credentials=credentials, project=project_id)
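Two hedged options, for illustration. The key file can be copied to the VM and the path updated to its Linux location, e.g. gcloud compute scp File.json my-vm:~/ from the local machine (the instance name is a placeholder). Alternatively, on a Compute Engine VM the client library can usually pick up the VM's attached service account automatically through Application Default Credentials, so no key file is needed at all, assuming that service account has BigQuery permissions:

from google.cloud import bigquery

# No explicit credentials: on a GCP VM, Application Default Credentials
# resolve to the instance's attached service account.
client = bigquery.Client(project='Project_ID523')

rows = client.query('SELECT 1 AS ok').result()  # illustrative query
for row in rows:
    print(row.ok)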
-
Updating Azure Virtual Machine Scale Set
I have hosted a website in an Azure virtual machine scale set by following the steps below:
- Create a VM and make the necessary changes/installations in IIS.
- Create a snapshot of the VM. This ensures the instance can be used for future changes.
- Create a disk from the snapshot.
- Create a VM from the disk.
- RDP to the instance and generalize it for deployment (sysprep): run %WINDIR%\system32\sysprep\sysprep.exe as admin, select Enter System Out-of-Box Experience (OOBE), tick the Generalize check box, and set Shutdown Option = Shutdown.
- Create an image (capture) from the above instance.
- Create the scale set from the above image.
Suppose there is a change in the web build. Is there a way to update the scale set without following all these steps again?
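For illustration, a hedged sketch of one possible flow once a new image has been captured: point the existing scale set at the new image and roll the instances, here by driving the az CLI from Python (subscription, group, and resource names are placeholders):

import subprocess

IMAGE_ID = ("/subscriptions/<sub-id>/resourceGroups/my-rg/providers/"
            "Microsoft.Compute/images/web-build-v2")

# Update the scale set model to reference the newly captured image
subprocess.run(
    ["az", "vmss", "update",
     "--resource-group", "my-rg",
     "--name", "my-vmss",
     "--set", f"virtualMachineProfile.storageProfile.imageReference.id={IMAGE_ID}"],
    check=True)

# Upgrade existing instances to the new model (needed when upgradePolicy is Manual)
subprocess.run(
    ["az", "vmss", "update-instances",
     "--resource-group", "my-rg",
     "--name", "my-vmss",
     "--instance-ids", "*"],
    check=True)

Note this still requires capturing a new generalized image per build; approaches such as custom script extensions or a deployment agent inside the instances can avoid re-imaging, but whether they fit depends on the setup.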
-
Writing a bytecode compiler?
I am attempting to create a simple scripting language in Rust. Right now I execute scripts in the form: breaking code into tokens -> arranging tokens into an AST -> running it in a VM.
Many blogs and guides say to use bytecode, adding one more stage: breaking code into tokens -> arranging tokens into an AST -> compiling to bytecode -> running it in a VM.
Let's assume the code looks somewhat like this:
func someFunction() { print("hello"); }
It has been lexed and broken down into an AST containing the position and the value, for error reporting:
Statement(Func("someFunction", [], [Call(Word("print"), [String("hello")])]), Position(1, 1))
Now suppose I convert that AST to bytecode, where 0x69 is the opcode for function start, 0x70 for block start, 0x71 for block end, 0x72 for function call, 0x73 for function end, and 0x01 for end:
0x69 0x70 0x72 0x00 0x71 0x73 0x01 // This is not the actual byte code but just a representation
With this kind of bytecode, how can I store the statement positions for error reporting?
My actual questions:
- Why do I need to use bytecode?
- Can I pass the AST directly to my VM?
- Is bytecode worthwhile? If yes, how can I store the statement positions for error reporting? (A sketch of one common approach follows below.)
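On the third question, for illustration: one common approach, hedged as a sketch rather than the answer, is to keep positions out of the instruction stream entirely and record them in a side table mapping bytecode offsets to source positions (CPython keeps a comparable line-number table alongside each code object). Sketched in Python for brevity, with the opcodes from the example above and illustrative positions; a Rust version would typically use a Vec<(usize, Position)>:

# opcodes from the example above
OP_FUNC, OP_BLOCK_START, OP_BLOCK_END, OP_CALL, OP_FUNC_END, OP_END = (
    0x69, 0x70, 0x71, 0x72, 0x73, 0x01)

bytecode = bytes([OP_FUNC, OP_BLOCK_START, OP_CALL, 0x00,
                  OP_BLOCK_END, OP_FUNC_END, OP_END])

# side table: instruction offset -> (line, column), emitted by the compiler
# alongside each instruction (positions here are illustrative)
positions = {0: (1, 1), 2: (1, 24)}

def run(code):
    ip = 0
    while ip < len(code):
        op = code[ip]
        if op == OP_CALL:
            # on an error, report the source position via the side table
            line, col = positions.get(ip, (0, 0))
            print(f"call instruction at line {line}, column {col}")
            ip += 2  # skip the one-byte operand (constant index)
        elif op == OP_END:
            return
        else:
            ip += 1

run(bytecode)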