How to get disconnected pairs of nodes in a network graph?
This is my dataset:
4095 546
3213 2059
4897 2661
...
3586 2583
3437 3317
3364 1216
Each line is a pair of nodes which have an edge between them. The whole dataset builds a graph, but I want to get many node pairs which are disconnected from each other. How can I get 1000 (or more) such node pairs from the dataset? For example:
2761 2788
4777 3365
3631 3553
...
3717 4074
3013 2225
Each line is a pair of nodes without an edge between them.
2 answers

Just do a BFS or DFS to get the size of every connected component in O(E) time. Once you have the component sizes, you can count the disconnected pairs easily: it's the sum of the products of every pair of sizes. E.g. if your graph has 3 connected components with sizes 50, 20, and 100, then the number of pairs of disconnected nodes is 50*20 + 50*100 + 20*100 = 8000.
If you want to actually output the disconnected pairs instead of just counting them, you should probably use union-find and then just iterate through all pairs of nodes, outputting each pair whose nodes are not in the same component.
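A rough sketch of that union-find route in Python (disconnected_pairs is an illustrative name, not from the answer; note it reports pairs lying in different components, which is what this answer proposes):

```python
import itertools

def find(parent, x):
    # Path-compressing find: walk up to the root, flattening as we go.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def disconnected_pairs(edges, limit=1000):
    """Return up to `limit` node pairs that lie in different components."""
    nodes = {n for e in edges for n in e}
    parent = {n: n for n in nodes}
    for u, v in edges:
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:
            parent[ru] = rv          # union the two components
    out = []
    for u, v in itertools.combinations(sorted(nodes), 2):
        if find(parent, u) != find(parent, v):
            out.append((u, v))
            if len(out) == limit:
                break
    return out
```

For example, with edges (1,2), (2,3), (4,5) the components are {1,2,3} and {4,5}, so the cross-component pairs are (1,4), (1,5), (2,4), (2,5), (3,4), (3,5).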

I think the other answer is more general, and probably nicer from a programming point of view. I just had a quick idea of how you could get the list very easily using numpy.
First create the adjacency matrix; your list of nodes is an array:
```python
import numpy as np

node_list = np.random.randint(10, size=(10, 2))
# + 1 to account for zero indexing
A = np.zeros((np.max(node_list) + 1, np.max(node_list) + 1))
A[node_list[:, 0], node_list[:, 1]] = 1  # set connected nodes to 1
A[node_list[:, 1], node_list[:, 0]] = 1  # mirror the edges, since the graph is undirected
x, y = np.where(A == 0)                  # find disconnected entries
disconnected_list = np.vstack([x, y]).T  # the final list of disconnected node pairs
```
I have no idea, though, how this will work with really large-scale networks.
See also questions close to this topic

How to use lambda in Python programming
How do I use lambda in my Python programming, and how do I work with it? I am trying to learn how to use it.
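For what it's worth, a minimal illustration of what a lambda is (the names here are made up for the example, not from the question):

```python
# A lambda is an anonymous, single-expression function.
square = lambda x: x * x          # same as: def square(x): return x * x

# Lambdas are most useful as short inline arguments, e.g. a sort key:
pairs = [(1, 'b'), (2, 'a')]
by_letter = sorted(pairs, key=lambda p: p[1])   # sort by the second element
```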

Whenever I choose a random file from a folder it says file not found
I'm trying to make the code choose a random file from a folder and tweet it on Twitter, but I get an error.
I'm on Windows 10, and haven't tried anything else.
```python
path = 'C:/Users/Name/Desktop/twitbot/home/gay'
files = os.listdir(path)
index = random.randrange(0, len(files))
message = "Picture of the moment!"
with open(files[index], 'rb') as photo:
    twitter.update_status_with_media(status=message, media=photo)
```
I expect the code to choose a picture and post it on Twitter, but it says FileNotFoundError: [Errno 2] No such file or directory: '753.jpg'
Edit: It does pick a photo from the dir, but it raises FileNotFoundError: [Errno 2] No such file or directory: 'numberOfFile.jpg' even though the file is clearly in the folder I set.
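The likely cause is that os.listdir returns bare file names, so open looks for them relative to the current working directory instead of the folder. A hedged sketch of the fix (pick_random_file is my name, not the poster's):

```python
import os
import random

def pick_random_file(folder):
    """Return the full path of a random file inside `folder`."""
    names = os.listdir(folder)                   # bare names like '753.jpg'
    return os.path.join(folder, random.choice(names))  # prepend the folder

# usage with the paths from the question:
# with open(pick_random_file('C:/Users/Name/Desktop/twitbot/home/gay'), 'rb') as photo:
#     twitter.update_status_with_media(status="Picture of the moment!", media=photo)
```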

How to include the number itself when searching for greater/lesser values in a .csv file?
I have a function that should accept three arguments: 1. some score (number) to search for in a .csv, 2. whether you want to look for higher or lower values, 3. whether you want to include the score you're searching for. The .csv I'm opening and reading has the values: 59 85 96 76 67 93 63 90 64 71 98 65
So say for example I input a score of 93, want the higher values, and set the include flag to False. The answer should be 2. If I set it to True, however, it should give an output of 3.
```python
count = 0
for i in nlst:
    i = int(i)
    if i > searchItem and Value == 'higher' and Include == True:
        count = count + 1
    elif i < searchItem and Value == 'lower' and Include == False:
        count = count + 1
return str(count)
So far I haven't been able to get the 2nd and 3rd arguments working. If I write numberCheck(90, 'higher', True) I get an answer of 3, when it should be 4. I have a feeling this has something to do with how I'm handling the arguments.
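One way the three arguments could interact is sketched below; the bug in the snippet above is that Include only filters which branch runs instead of adding the tie itself. Names here (number_check, direction, include) are mine, not the poster's:

```python
def number_check(nlst, search_item, direction, include):
    """Count scores above/below `search_item`; `include` also counts ties."""
    count = 0
    for raw in nlst:
        i = int(raw)
        if i == search_item:
            if include:
                count += 1          # the searched score itself counts only when include=True
        elif (i > search_item) if direction == 'higher' else (i < search_item):
            count += 1
    return count
```

With the scores from the question, number_check(scores, 93, 'higher', False) gives 2 and number_check(scores, 93, 'higher', True) gives 3, matching the expected behaviour.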

3D interpolation of display RGB using display XYZ and RGB
I am working on a display and I want to interpolate RGB values from XYZ. I used a measurement device to record the XYZ value corresponding to each RGB value in a 9x9x9 grid. Now I have the XYZ values of my image, and I want to convert them to display RGB by interpolating between the measured XYZ and RGB values.
I have already created a LUT that interpolates by finding minimum CIELAB differences, and it works well, but it is very slow because it works pixel by pixel. I tried Matlab's interp3 and interpn functions in various ways, but I am not sure how to feed my input into these functions. My data looks like this (as a sample; I am not posting the full 9x9x9 data):
```matlab
XYZlut = [0.085954 0.097486 0.13636;
          0.19496  0.16676  0.62822;
          0.30154  0.19816  1.3158;
          0.68737  0.35471  3.4668;
          1.5699   0.69064  8.4565;
          3.052    1.2449   16.831;
          6.2012   2.4245   34.592;
          12.034   4.5791   67.462;
          22.04    8.2393   123.88;
          39.937   14.723   224.99];
RGBlut = [0 0 0;
          0 0 19;
          0 0 38;
          0 0 57;
          0 0 76;
          0 0 95;
          0 0 114;
          0 0 133;
          0 0 152;
          0 0 171];
NewXYZ = [62.10263 66.39005 169.7117;
          76.8211  82.24026 201.0852;
          73.61543 78.18117 195.2129;
          71.9406  76.70645 190.4692];
[NewXYZ1, NewXYZ2, NewXYZ3] = meshgrid(NewXYZ(:,1), NewXYZ(:,2), NewXYZ(:,3));
newRGB = interp3(XYZlut(:,1), XYZlut(:,2), XYZlut(:,3), RGBlut, NewXYZ1, NewXYZ2, NewXYZ3, 'cubic')
```
I want to interpolate newRGB corresponding to NewXYZ by using XYZlut and RGBlut. I would be thankful if somebody could guide me on how to use interp3/interpn, or any other Matlab function, to interpolate this type of data. I am working in Matlab 2018b.
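Note that interp3 expects values sampled on a regular grid, whereas the measured LUT points here are scattered in XYZ space; Matlab's scatteredInterpolant (or griddata) is the usual tool for that case. Purely for illustration, here is the same scattered-data idea in Python with scipy, on toy data rather than the poster's measurements (one output channel; each of R, G, B would be interpolated the same way):

```python
import numpy as np
from scipy.interpolate import griddata

# Toy scattered LUT: 3-D inputs -> 1-D output (assumption: synthetic data).
rng = np.random.default_rng(0)
xyz_lut = rng.random((200, 3))       # stand-in for the measured XYZ triples
r_lut = xyz_lut.sum(axis=1)          # stand-in "R" response for each triple
new_xyz = np.array([[0.5, 0.5, 0.5]])

# Piecewise-linear interpolation over the scattered points.
r_new = griddata(xyz_lut, r_lut, new_xyz, method='linear')
```

Since the toy response is linear in XYZ, the interpolated value at (0.5, 0.5, 0.5) comes out as 1.5.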

How to partition an image into blocks in Matlab, e.g. 5 x 5 or 25 x 25?
I have an image that I want to partition into blocks of the following sizes: 5 x 5, 11 x 11, and 25 x 25. This is my first time using Matlab. Thank you to anyone who can help me.
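In Matlab, mat2cell or blockproc is the usual tool for this. For illustration, the same block-partitioning idea as a hedged numpy sketch (to_blocks is my name; it assumes the block size divides the image dimensions, otherwise the image would need padding or cropping first):

```python
import numpy as np

def to_blocks(img, b):
    """Split a 2-D array into non-overlapping b x b tiles."""
    h, w = img.shape
    assert h % b == 0 and w % b == 0, "pad or crop the image first"
    # reshape + swapaxes yields shape (h//b, w//b, b, b): a grid of tiles
    return img.reshape(h // b, b, w // b, b).swapaxes(1, 2)

img = np.arange(100).reshape(10, 10)
blocks = to_blocks(img, 5)     # 2 x 2 grid of 5 x 5 tiles
```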

I have this Python piece of code which I don't understand and want to translate to Matlab
I don't understand what the p. is doing and what it is trying to access.
```python
class Myclass(object):
    def __init__(self, my_id, predictors, data_dir):
        self.predictors = predictors
        self.my_id = my_id
        self.data_dir = data_dir
        for p in predictors:
            p.my_id = self.my_id
            p.data_dir = self.data_dir
        for p in self.predictors:
            if p.var_name == "something":
                ...
```
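For context, p is simply the loop variable ranging over the predictor objects in the list, and p.my_id = ... sets an attribute on each of those objects. A minimal self-contained illustration (the Predictor class and the values are made up for the example):

```python
class Predictor:
    def __init__(self, var_name):
        self.var_name = var_name

preds = [Predictor('a'), Predictor('something')]

# The loop in the question does the equivalent of this: copy the parent
# object's fields onto every predictor object in the list.
for p in preds:
    p.my_id = 42          # creates/overwrites the attribute on each object
    p.data_dir = '/tmp'
```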

Draw 3D Lines from vector points on Android
I would like to draw a 3D line from points in a kind of graph using Unity. The goal is to draw a path, something like this: https://i.stack.imgur.com/HhkOX.jpg?fbclid=IwAR2hRKT8Zl8GLBbUOl50H3qlJD3_Hg2uRCpba8DhZozmA7ujdOwFtot77w
It looks a lot more complicated than I thought. I also need to draw it from code because I get values of my points at runtime. Does anyone have an idea?
Thanks all!!

How to find bridges in a graph with iterative DFS?
I need to find the bridges in a graph using an iterative DFS. The code I have is recursive, and I have no idea how to convert it to an iterative DFS:
```java
void bridgeUtil(int u, boolean visited[], int disc[], int low[], int parent[]) {
    visited[u] = true;
    disc[u] = low[u] = ++time;
    Iterator<Integer> i = adj_sub[u].iterator();
    while (i.hasNext()) {
        int v = i.next();
        if (!visited[v]) {
            parent[v] = u;
            bridgeUtil(v, visited, disc, low, parent);
            low[u] = Math.min(low[u], low[v]);
            if (low[v] > disc[u]) {
                System.out.print(u);
                System.out.print(" ");
                System.out.println(v);
            }
        } else if (v != parent[u]) {
            low[u] = Math.min(low[u], disc[v]);
        }
    }
}

void bridge() {
    boolean visited[] = new boolean[V];
    int disc[] = new int[V];
    int low[] = new int[V];
    int parent[] = new int[V];
    for (int i = 0; i < V; i++) {
        parent[i] = NIL;
        visited[i] = false;
    }
    for (int i = 0; i < V; i++)
        if (visited[i] == false) {
            bridgeUtil(i, visited, disc, low, parent);
            // DFS_sub(i, visited, disc, low, parent);
        }
}
```
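One standard way to make this DFS iterative is an explicit stack whose entries remember which neighbour to resume at, so the low-link update can happen when a child's subtree is finished. Sketched in Python rather than Java for brevity (adj is an adjacency list; like the recursive original, this only treats the single edge back to the parent specially, so parallel edges to the parent would need extra handling):

```python
def bridges(adj, n):
    """Iterative DFS bridge finding. adj: list of adjacency lists, n: node count."""
    disc = [-1] * n          # discovery times (-1 = unvisited)
    low = [0] * n            # low-link values
    timer = 0
    out = []
    for start in range(n):
        if disc[start] != -1:
            continue
        # stack entries: (node, parent, index of next neighbour to try)
        stack = [(start, -1, 0)]
        disc[start] = low[start] = timer; timer += 1
        while stack:
            u, parent, i = stack.pop()
            if i < len(adj[u]):
                stack.append((u, parent, i + 1))   # come back for neighbour i+1
                v = adj[u][i]
                if disc[v] == -1:                  # tree edge: descend into v
                    disc[v] = low[v] = timer; timer += 1
                    stack.append((v, u, 0))
                elif v != parent:                  # back edge
                    low[u] = min(low[u], disc[v])
            elif parent != -1:                     # all neighbours of u done
                low[parent] = min(low[parent], low[u])
                if low[u] > disc[parent]:          # same test as the recursion
                    out.append((parent, u))
    return out
```

For example, on the graph with edges 0-1, 1-2, 2-0, 2-3 the only bridge reported is (2, 3).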

GraphQL: use bodyparser middleware?
I use GraphQL with Koa 2. I don't want to apply GraphQL to all the endpoints; I want to combine a RESTful API with GraphQL.
```javascript
const Koa = require('koa')
const json = require('koa-json')
const bodyparser = require('koa-bodyparser')
const config = require('./config')
const port = config.URL.port
const cors = require('@koa/cors')
const UserRoute = require('./routes/user.router')
const { ApolloServer, gql } = require('apollo-server-koa')

const app = new Koa()

const typeDefs = gql`
  type Book {
    title: String
    author: String
  }
  type Query {
    books: [Book]
  }
`
const resolvers = {
  Query: {
    books: () => books,
  },
}

app.use(cors({
  origin: '*',
  credentials: true,
  methods: ['PUT', 'POST', 'GET', 'DELETE', 'OPTIONS'],
  allowedHeaders: ['Content-Type', 'Content-Length', 'Authorization', 'Accept', 'X-Requested-With', 'x-access-token']
}))
app.use(UserRoute.routes(), UserRoute.allowedMethods())
app.use(bodyparser())
app.use(json())

const server = new ApolloServer({ typeDefs, resolvers })
server.applyMiddleware({
  app,
  path: config.URL.graphql,
})

module.exports = app.listen(port)
```
I just want to use GraphQL on some of the endpoints.
I use Postman for debugging. Any good suggestions?
Thank you 🙏

How to compute a probability matrix based on a binary matrix?
```r
library(igraph)
set.seed(41)
n <- 10
A <- sample.int(2, n*n, TRUE) - 1L
dim(A) <- c(n, n)
m <- sum(A)
g <- graph_from_adjacency_matrix(A)
k_in  <- degree(g, v = V(g), mode = "in",  loops = TRUE, normalized = FALSE)
k_out <- degree(g, v = V(g), mode = "out", loops = TRUE, normalized = FALSE)
p <- (k_in %*% t(k_out) / (2*m)) / (k_in %*% t(k_out) / (2*m) + k_in %*% t(k_out) / (2*m))
round(p, 3)
```
All values of the probability matrix p are 0.5. I think the error is in the denominator of p, because the matrix A is not symmetric.
Question. How do I specify the denominator correctly?
Edit. After Stéphane Laurent's answer.
I think we should have four different values: k_j_out, k_i_in, k_i_out, k_j_in.
Finally, I need to obtain the weight matrix, W.
```r
I <- matrix(0, n, n)
diag(I) <- 1
W <- A %*% (I - P) - t(A) %*% (I - P)
```
And I think this matrix should be symmetric.

Compute ALL spanning trees of a directed acyclic graph using igraph, network, or other R package
I want to compute the complete set of spanning trees for a graph. The graphs I'm working with are small (usually less than 10 nodes).
I see functionality for computing the minimum spanning tree with igraph:

```r
library(igraph)
g <- sample_gnp(100, 3/100)
g_mst <- mst(g)
```
and I see a previous StackOverflow post that described how to compute a spanning tree using a breadth-first search. The code below is adapted from the accepted answer:

```r
r <- graph.bfs(g, root = 1, neimode = 'all', order = TRUE, father = TRUE)
h <- graph(rbind(r$order, r$father[r$order, na_ok = TRUE])[, -1], directed = FALSE)
```
However, I don't know how to adapt this to compute multiple spanning trees. How would one adapt this code to compute all spanning trees? I'm thinking that one piece of this would be to loop through each node to use as the "root" of each tree, but I don't think that takes me all the way there (since there could still be multiple spanning trees associated with a given root node).
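Since the graphs in question are tiny (under 10 nodes), brute-force enumeration is feasible: a spanning tree is exactly an (n-1)-edge subset that connects all nodes. A hedged sketch of that enumeration in Python (all_spanning_trees is my name; the same idea would carry over to R and igraph's edge list):

```python
import itertools

def all_spanning_trees(nodes, edges):
    """Enumerate every spanning tree of a small graph by brute force:
    test each (n-1)-edge subset and keep the ones connecting all nodes."""
    n = len(nodes)
    trees = []
    for subset in itertools.combinations(edges, n - 1):
        # check connectivity of the subset with a small union-find
        parent = {v: v for v in nodes}
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        merged = 0
        for u, v in subset:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                merged += 1
        if merged == n - 1:        # n-1 acyclic edges => spanning tree
            trees.append(subset)
    return trees
```

A triangle has 3 spanning trees (drop any one edge), and a 4-cycle has 4, which the sketch reproduces; the cost grows combinatorially, so this only suits very small graphs.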
EDIT
The end goal is to compute the distortion of a graph, which is defined as follows (link, see page 5):
Consider any spanning tree T on a graph G, and compute the average distance t = E[H_{T}] on T between any two nodes that share a link in G. The distortion measures how T distorts links in G, i.e. it measures how many extra hops are required to go from one side of a link in G to the other, if we are restricted to using T. The distortion is defined [13] to be the smallest such average over all possible Ts. Intuitively distortion measures how treelike a graph is.
[13] H. Tangmunarunkit, R. Govindan, and S. Jamin, "Network topology generators: degree-based vs. structural," in SIGCOMM, 2002.

How to create a layout that will plot nodes in the same group close together?
Using igraph, I aim to use community detection approaches, as I would like to draw a network layout that makes the distinct communities and their connections visible.
This is my code so far:

```r
library(igraph)
dat <- read.csv(file.choose(), header = TRUE)  # choose an edge list in .csv file format
```
I have a data frame with the following columns:

```
Var1  = node 1
Var2  = node 2
value = edge weight (Markov chain probabilities)
```

```
> head(dat, 100)
    Var1 Var2       value
1      4    4 0.833333333
2     10    4 0.000000000
3     11    4 0.000000000
4     12    4 0.000000000
5     13    4 0.000000000
6     21    4 0.000000000
7     23    4 0.000000000
8     31    4 0.000000000
9     41    4 0.000000000
10    42    4 0.000000000
11    43    4 0.000000000
12    44    4 0.000000000
13    45    4 0.000000000
14    46    4 0.000000000
15    47    4 0.000000000
16    48    4 0.000000000
17    52    4 0.000000000
18    53    4 0.000000000
19    61    4 0.000000000
20    62    4 0.000000000
21    63    4 0.000000000
22    71    4 0.000000000
23    81    4 0.000000000
24    82    4 0.000000000
25    83    4 0.000000000
26    91    4 0.000000000
27    92    4 0.000000000
28   100    4 0.000000000
29   100    4 0.000000000
30   111    4 0.000000000
31     4   10 0.000000000
32    10   10 0.000000000
33    11   10 0.000000000
34    12   10 0.010695187
35    13   10 0.000000000
36    21   10 0.000000000
37    23   10 0.000000000
38    31   10 0.000000000
39    41   10 0.010869565
40    42   10 0.000000000
41    43   10 0.000000000
42    44   10 0.000000000
43    45   10 0.000000000
44    46   10 0.000000000
45    47   10 0.000000000
46    48   10 0.000000000
47    52   10 0.000000000
48    53   10 0.000000000
49    61   10 0.000000000
50    62   10 0.074074074
51    63   10 0.000000000
52    71   10 0.000000000
53    81   10 0.000000000
54    82   10 0.000000000
55    83   10 0.000000000
56    91   10 0.000000000
57    92   10 0.000000000
58    93   10 0.000000000
59   100   10 0.010526316
60   111   10 0.018867925
61     4   11 0.166666667
62    10   11 0.000000000
63    11   11 0.973409307
64    12   11 0.010695187
65    13   11 0.126126126
66    21   11 0.000000000
67    23   11 0.000000000
68    31   11 0.000000000
69    41   11 0.000000000
70    42   11 0.008928571
71    43   11 0.000000000
72    44   11 0.038461538
73    45   11 0.000000000
74    46   11 0.000000000
75    47   11 0.000000000
76    48   11 0.000000000
77    52   11 0.000000000
78    53   11 0.000000000
79    61   11 0.000000000
80    62   11 0.000000000
81    63   11 0.333333333
82    71   11 0.000000000
83    81   11 0.000000000
84    82   11 0.000000000
85    83   11 0.000000000
86    91   11 0.071428571
87    92   11 0.006622517
88    93   11 0.000000000
89   100   11 0.005263158
90   111   11 0.018867925
91     4   12 0.000000000
92    10   12 0.000000000
93    11   12 0.003798670
94    12   12 0.673796791
95    13   12 0.099099099
96    21   12 0.000000000
97    23   12 0.000000000
98    31   12 0.029702970
99    41   12 0.141304348
100   42   12 0.017857143

> dim(dat)
[1] 900   3
```

```r
g <- graph.data.frame(dat[, c('Var1', 'Var2')], directed = F)  # coerce the data into the two-column edge-list format that igraph likes
cluster <- cluster_walktrap(g)
list <- groups(cluster)
g$value <- cluster$membership[as.character(g$Var1)]
g <- simplify(g)  # remove loops and multiple edges
plot(cluster, g, vertex.size = 5, edge.width = .1)
```
The output doesn't make any sense to me. Could you help me please? Thanks

User experience with a multi-language website
I'm creating a website that is a kind of search engine for registered customers. Out of the three offered languages, the customers can choose a default one, just like the visitors.
When a visitor visits the profile of a registered customer, the employer wants the language of the entire website's interface to switch to that customer's language.
What would be the advantages and disadvantages of this idea?

Share Count on Twitter, Linkedin, Pinterest and Instagram
I have tried
https://stackoverflow.com/questions/21012357/facebooktwitterlinkedinsharelinkcount
But the LinkedIn API in that link always gives a count of 0, even though I have shared the link twice on LinkedIn. The Twitter API for getting the share count is not working. And on Pinterest, when I share content or a URL, only the image gets shared, so I don't know how to get the share count of the URL there. I am doing this in Rails as well as in JS. I have also tried several gems, such as:
> scouter social_url_stats share_counts social_share_count
But none of these gems gave me a solution.

How to increase the followers of a specific channel on Twitch
I am trying to increase the followers of a specific channel on Twitch by using Twitch Follower Bot Farmer, a Python tool whose documentation is available here: https://github.com/MD3XTER/TwitchFarmer
Every time I try to run this script, it shows me "No more proxies!"
Someone please tell me what I am doing wrong, or point me to another script that does the same task.
Thanks in advance.