Python & NetworkX for Network Topology Data
I have tried many times to use the "NetworkX" Python library to analyze the datasets found at this link:
http://konect.uni-koblenz.de/networks/topology
Whenever I execute my Python code on the data found there, I get unrealistic results. For example:
g = nx.read_weighted_edgelist('out.topology')
g.size()
0
Getting 0 for this huge dataset is completely wrong!
Could you please help me read this data with the "NetworkX" Python library?
1 answer

As someone mentioned, trying to read the list as-is produces errors. However, if you get rid of the first line (
% sym positive
) and try the code below to create your graph, it should be fine:

import networkx as nx
with open("out.topology", 'rt') as f:
    g = nx.parse_edgelist(f, create_using=nx.DiGraph(), data=[('weight', float), ('timestamp', float)])
The data contains 4 columns: source, target, weight, and timestamp of the edge.
Just include that info in the arguments as shown in my snippet.
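If editing the file by hand is undesirable, a small variation of the same idea (a sketch, assuming the standard KONECT layout with '%'-prefixed header lines; the function name is mine) is to tell parse_edgelist to treat '%' as the comment character:

```python
import networkx as nx

def read_konect_edgelist(path):
    # Treat KONECT's '%'-prefixed header lines as comments instead of
    # deleting them by hand; columns are: source target weight timestamp.
    with open(path, 'rt') as f:
        return nx.parse_edgelist(
            f,
            comments='%',
            create_using=nx.DiGraph(),
            data=[('weight', float), ('timestamp', float)],
        )
```

This keeps the original download intact and works for any KONECT dataset with the same column layout.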
See also questions close to this topic

Multiple decorators wrapping a generator function in Python 3.7 - all possible coin combinations problem
I want to write a program that shows all the possible ways a combination of different coins (5c, 10c, 50c, etc.) can amount to a certain value (for instance, $4). Let's say, for starters, I only want to consider 50c, $1 and $2 in the possible combinations. If I wanted to use brute force, I could write 3 nested
while
loops and return that combination of currency every time they sum up to the value I want (in this case, $4). However, as we consider more currency options (for instance, 1c, 5c, 10c, 25c, 50c, $1, $2, $5, $10, $50, $100) and bigger sums (say I want all combinations that reach $400), the code becomes very verbose and difficult to maintain. If I want a generic algorithm that works for any country and combination of currencies, I can't rely on simple loop nesting. Given this problem, I've tried using decorators so I can nest the loops in a single function:
INTENDED_SUM = 400
coin_count = {
    200: 0,
    100: 0,
    50: 0,
}

def max_range(value):
    return INTENDED_SUM / value

def coin_loop(coin_value):
    def decorator(func):
        while coin_count[coin_value] <= max_range(coin_value):
            yield from func()
            coin_count[coin_value] += 1
        else:
            coin_count[coin_value] = 0
    return decorator

@coin_loop(100)
def get_combinations():
    agg = sum([i * coin_count[i] for i in coin_count])
    if agg == INTENDED_SUM:
        yield coin_count

for i in get_combinations:
    print(i)
This is the output:
{200: 0, 100: 4, 50: 0}
However, if I add multiple decorators to the
get_combinations()
function, a TypeError is raised:

@coin_loop(200)
@coin_loop(100)
@coin_loop(50)
def get_combinations():
    agg = sum([i * coin_count[i] for i in coin_count])
    if agg == INTENDED_SUM:
        yield coin_count

Traceback (most recent call last):
  File "31coin_sums.py", line 29, in <module>
    for i in get_combinations:
  File "31coin_sums.py", line 15, in decorator
    yield from func()
TypeError: 'generator' object is not callable
So I have two questions:
Why does get_combinations already show up as a generator and not need to be called in the last 2 lines of the code? Shouldn't it be:
for i in get_combinations(): print(i)
Why isn't decorator nesting working in this case? Expected output should be something like:
{200: 2, 100: 0, 50: 0}
{200: 1, 100: 2, 50: 0}
{200: 1, 100: 1, 50: 2}
{200: 1, 100: 0, 50: 4}
{200: 0, 100: 4, 50: 0}
{200: 0, 100: 3, 50: 2}
{200: 0, 100: 2, 50: 4}
{200: 0, 100: 1, 50: 6}
{200: 0, 100: 0, 50: 8}
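For what it's worth, the usual way to avoid both nested loops and decorator tricks is recursion: handle one denomination per call and recurse on the remaining sum. A minimal sketch (the function name is mine, not from the question):

```python
def coin_combinations(total, coins):
    # total and coin values share one unit (e.g. cents); one recursion
    # level per denomination in coins.
    if not coins:
        if total == 0:
            yield {}  # exact sum reached with no coins left over
        return
    coin, rest = coins[0], coins[1:]
    for count in range(total // coin + 1):
        for combo in coin_combinations(total - coin * count, rest):
            yield {coin: count, **combo}

for combo in coin_combinations(400, [200, 100, 50]):
    print(combo)
```

This scales to any list of denominations without changing the code, which is what the nested-loop version cannot do.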

Python: How to change timestamp to int in array
I'm trying to change timestamp datatype into int in array, for example,
array([Timestamp('2011-12-29 00:00:00'), Timestamp('2013-12-12 00:00:00'), Timestamp('2014-01-09 00:00:00'), Timestamp('2014-01-29 00:00:00')])
to
array([20111229, 20131212, 20140109, 20140129])
How can I do this?
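One way to do this (a sketch, assuming these are pandas Timestamps in a numpy object array) is to format each timestamp as YYYYMMDD and convert the string to an int:

```python
import numpy as np
import pandas as pd

ts = np.array([pd.Timestamp('2011-12-29'), pd.Timestamp('2013-12-12'),
               pd.Timestamp('2014-01-09'), pd.Timestamp('2014-01-29')])

# strftime gives 'YYYYMMDD'; int() turns it into the desired integer form
ints = np.array([int(t.strftime('%Y%m%d')) for t in ts])
```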

Creating a dataset in tensorflow by adding random elements of two datasets
I have two sets of images, say set A and set B. These are available as numpy arrays. I will choose x random images from A and y random images from B and then add them together (taking the average at the end). These new images will become the input to a CNN.
Now my question is how can I do this using Tensorflow's data pipeline approach?
I can obviously do it outside of TensorFlow using numpy, but this pre-created data would be too big. I could also use feed_dicts, but I wanted to know if there is a way to achieve this with pipelines, because that seems to be the favoured approach now.
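Since no answer is attached here, one hedged sketch: express the mixing step as a plain numpy function, which could then be wrapped into a pipeline (e.g. via tf.data.Dataset.from_generator). The names and shapes below are assumptions, not from the question:

```python
import numpy as np

def mixed_image(A, B, x, y, rng):
    # pick x random images from A and y from B, then average them all
    picks = np.concatenate([A[rng.choice(len(A), size=x)],
                            B[rng.choice(len(B), size=y)]])
    return picks.mean(axis=0)

rng = np.random.default_rng(0)
A = rng.random((100, 28, 28))  # hypothetical image set A
B = rng.random((100, 28, 28))  # hypothetical image set B
img = mixed_image(A, B, x=3, y=2, rng=rng)
```

Because the images are generated on demand, nothing larger than one mixed image needs to exist at a time, avoiding the "precreated data too big" problem.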

Python matplotlib graphic
I am writing a script in Python 3 in which I want to create a bar-like graph that displays a sensor value. I want it to be in the center of the plot and to fade downwards slowly after each value is passed. The values might come every 1 or 2 seconds, but it may happen that two or three values come in a little less than two seconds.
How can I implement this? I have searched both in matplotlib and in general, but with no luck.
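One possible approach (a sketch with assumed names, not a complete solution): keep a single centered bar, reset its opacity whenever a new reading arrives, and lower the opacity from a periodic timer (e.g. fig.canvas.new_timer) so the bar fades between readings regardless of how irregularly values come in:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_xlim(-1, 1)
ax.set_ylim(0, 10)
bar = ax.bar([0], [0.0], width=0.4, align='center', alpha=1.0)[0]

def on_new_value(value):
    # call whenever a sensor reading arrives: show it at full opacity
    bar.set_height(value)
    bar.set_alpha(1.0)

def fade(step=0.1):
    # call periodically (e.g. every 100 ms from a timer) to fade the bar
    bar.set_alpha(max(bar.get_alpha() - step, 0.0))

on_new_value(5.0)
fade()
```

Decoupling the fade timer from the arrival of values means bursts of two or three readings simply keep resetting the opacity, which matches the described behaviour.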

State machine compression
I have a log of outputs given the inputs, for example:
0 =A=> 1 =B=> 1 =B=> 0 =A=> 1 =A=> 0 =A=> 0
And I would like to find the minimal state machine representing it.
I tried, by hand, to break it down into an ordered list of transitions:
0 =A=> 1
1 =B=> 1
1 =B=> 0
0 =A=> 1
1 =A=> 0
0 =A=> 0
If we consider that there are only two states:
q0 with output 0,
q1 with output 1.
The list becomes:
q0 (0) =A=> q1 (1)
q1 (1) =B=> q1 (1)
q1 (1) =B=> q0 (0)
q0 (0) =A=> q1 (1)
q1 (1) =A=> q0 (0)
q0 (0) =A=> q0 (0)
We can see that from the state q0, the input A leads to q1 in lines 1 & 4, but to state q0 in line 6. Same issue in the q1 state with the action B. So I have to create two additional states: q2 with output 0, and q3 with output 1. I can then rewrite the list the following way:
q0 (0) =A=> q1 (1)
q1 (1) =B=> q3 (1)
q3 (1) =B=> q0 (0)
q0 (0) =A=> q1 (1)
q1 (1) =A=> q2 (0)
q2 (0) =A=> q0 (0)
And done.
It seems simple by hand, but I can't find an algorithm that achieves this given the list of transitions. I know that there are several solutions to this example, but I just need an algorithm that can find one of them.
I considered treating this as an optimization problem and using, for instance, simulated annealing or a genetic algorithm, but this seems overkill. Plus, I really feel that there is a simpler way, maybe something related to graph theory?
Best regards, Alexandre
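Since no answer is attached, here is one hedged sketch of an exact approach (all names are mine): try k = 1, 2, ... states and, for each k, backtrack over assignments of a state to each observed position so that every state has a fixed output and (state, input) -> state stays a function. It is brute force, but on short logs it is guaranteed to return a machine with the minimal number of states:

```python
from itertools import count

def solve(outputs, inputs, k):
    # outputs: observed output at each position; inputs: input between
    # consecutive positions; k: number of states allowed
    state_out, trans, assign = {}, {}, []

    def backtrack(i):
        if i == len(outputs):
            return True
        # symmetry break: only introduce state s once all states < s are used
        limit = min(k - 1, (max(assign) + 1) if assign else 0)
        for s in range(limit + 1):
            if state_out.get(s, outputs[i]) != outputs[i]:
                continue  # state s is already bound to a different output
            key = (assign[-1], inputs[i - 1]) if i > 0 else None
            if key is not None and trans.get(key, s) != s:
                continue  # would make the machine non-deterministic
            new_out = s not in state_out
            new_tr = key is not None and key not in trans
            if new_out:
                state_out[s] = outputs[i]
            if new_tr:
                trans[key] = s
            assign.append(s)
            if backtrack(i + 1):
                return True
            assign.pop()
            if new_tr:
                del trans[key]
            if new_out:
                del state_out[s]
        return False

    return (state_out, trans) if backtrack(0) else None

def minimal_machine(outputs, inputs):
    for k in count(1):
        result = solve(outputs, inputs, k)
        if result is not None:
            return result

# the log from the question: 0 =A=> 1 =B=> 1 =B=> 0 =A=> 1 =A=> 0 =A=> 0
state_out, trans = minimal_machine([0, 1, 1, 0, 1, 0, 0], list("ABBAAA"))
```

For this log the search finds a 4-state machine, matching the hand construction above. A caveat: minimizing an incompletely specified machine like this is believed to be hard in general, which is why standard partition-refinement minimization only applies once the machine is fully specified.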

Implementing Graph in c++ using OOP
I have to implement an unweighted and undirected graph in C++ using OOP, where the nodes consist of positive integers. I'm not sure whether a struct counts as object-oriented, and I get the error "Exception thrown: read access violation. newEdge was 0xDDDDDDDD.". Please help.
Here is my code:
#include <iostream>
#include <vector>
using namespace std;

struct edge
{
    int index;
    edge * next = NULL;
};

class vertex
{
private:
    int value;
    edge * begin;
public:
    vertex()
    {
        value = 10;
        begin = new edge;
        begin = NULL;
    }
    edge * getEdge()
    {
        return begin;
    }
    void newValue( int value )
    {
        this->value = value;
    }
    void addToEdge( edge * tobe )
    {
        if ( begin != NULL )
        {
            edge * newEdge = begin;
            newEdge = this->begin;
            while ( newEdge->next != NULL ) //the error message is here
            {
                newEdge = newEdge->next;
            }
            newEdge->next = tobe;
        }
        else
        {
            begin = tobe;
        }
    }
    void printList()
    {
        edge * newPtr = begin;
        //newPrt = begin;
        while ( newPtr )
        {
            cout << " ->" << newPtr->index;
            newPtr = newPtr->next;
        }
    }
    void printNum()
    {
        cout << value;
    }
    ~vertex()
    {
        delete begin;
    }
} * graph[15];

void addEdge( int vertex1, int vertex2 )
{
    edge * temp1 = new edge;
    edge * temp2 = new edge;
    temp1->index = vertex2;
    temp1->next = NULL;
    graph[vertex1]->addToEdge( temp1 );
    temp2->index = vertex1;
    temp2->next = NULL;
    graph[vertex2]->addToEdge( temp2 );
    delete temp1;
    delete temp2;
}

void printGraph()
{
    int i;
    for ( i = 0; i < 10; i++ )
    {
        cout << " (" << i << ")";
        graph[i]->printList();
        cout << endl;
    }
}

int main()
{
    vertex * vptr[15];
    vptr[0] = new vertex;
    vptr[0]->newValue( 10 );
    graph[0] = vptr[0];
    vptr[1] = new vertex;
    vptr[1]->newValue( 20 );
    graph[1] = vptr[1];
    vptr[2] = new vertex;
    vptr[2]->newValue( 30 );
    graph[2] = vptr[2];
    addEdge( 0, 1 );
    addEdge( 0, 2 );
    addEdge( 1, 2 );
    printGraph();
}
I'm not really sure why I get this error message or how to fix it, and also whether I'm going in the right direction for my task.
I can use either an adjacency matrix or an adjacency list approach, but I'm not really sure how to use the adjacency list approach.
Thank you!

How to build a dictionary that maps nodes to their degree in networkx 2.1, Python 3?
What I have tried is here:
def comm_deg(G):
    nodes = G.nodes()
    A = nx.adj_matrix(G)
    deg_dict = {}
    n = len(nodes)
    degree = A.sum(axis=1)
    for i in range(n):
        deg_dict[nodes[i]] = degree[i, 0]
    return deg_dict
It raises KeyError: 0. I find that indexing either
nodes[]
or
degree[,]
triggers this issue. Here is the full error message:
File "/Users/shaoyupei/Desktop/code/untitled1.py", line 25, in comm_deg
    deg_dict[nodes[i]] = degrees[i,0]
File "/anaconda3/lib/python3.6/site-packages/networkx/classes/reportviews.py", line 178, in __getitem__
    return self._nodes[n]
KeyError: 0
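A likely cause (an assumption, since no answer is attached): in networkx 2.x, G.nodes() returns a NodeView, and nodes[i] looks up the *node labeled* i rather than the i-th position, hence KeyError: 0 when 0 is not a node label. Converting with nodes = list(G.nodes()) restores positional indexing, but the whole mapping can be built more directly:

```python
import networkx as nx

def comm_deg(G):
    # dict(G.degree()) maps each node to its degree in networkx 2.x,
    # without going through the adjacency matrix at all
    return dict(G.degree())

G = nx.Graph([("a", "b"), ("b", "c")])
```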

Remove_edge() removes all nodes in network?
I am trying to implement the Newman-Girvan algorithm using networkx. However, I've run into a problem: when I try to print all nodes for each component, it basically prints nothing; it looks like
remove_edge(*edge_to_remove(G))
removes all nodes. How does that happen?

import networkx as nx

def edge_to_remove(G):
    dict1 = nx.edge_betweenness_centrality(G)
    list_of_tuples = list(dict1.items())
    list_of_tuples.sort(key=lambda x: x[1], reverse=True)
    return list_of_tuples[0][0]

def newman_girvan(G):
    c = nx.connected_component_subgraphs(G)
    l = len(list(c))
    while (l == 1):
        G.remove_edge(*edge_to_remove(G))
        c = nx.connected_component_subgraphs(G)
        l = len(list(c))
    for i in c:
        print(i.nodes())
        print("..........")
    return c

G = nx.Graph()
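A plausible explanation (an assumption, as no answer is attached): connected_component_subgraphs returns a generator, and len(list(c)) consumes it, so the later for i in c loop iterates over an already-exhausted generator and prints nothing; no nodes were actually removed. Materializing the components into a list first avoids this. The effect can be seen with any generator of subgraphs:

```python
import networkx as nx

G = nx.Graph([(0, 1), (2, 3)])
comps = (G.subgraph(c) for c in nx.connected_components(G))
assert len(list(comps)) == 2   # first pass consumes the generator
assert list(comps) == []       # second pass: nothing left to yield

# keeping a list instead survives repeated use
comp_list = [G.subgraph(c) for c in nx.connected_components(G)]
```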

Graph Union in Networkx
I have two graphs:
G.nodes() = [0, 3]
H.nodes() = [1, 2, 3, 4]
I am trying to merge the graphs together while only relabeling the nodes of H and maintaining the same labels for G so the resulting graph would have the following nodes:
U.nodes() = [0,3,1,2,5,4]
Here the first two elements come from G and everything else from H; since there is a name conflict at node 3, it gets renamed to the next available integer.
disjoint_union from networkx doesn't work because G.nodes() gets relabeled to [0, 1].
Any help is appreciated!
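One hedged sketch (the helper name is mine): relabel only the conflicting nodes of H to the next unused integers with nx.relabel_nodes, then take the ordinary nx.union, which preserves all other labels:

```python
import networkx as nx

def union_keep_G(G, H):
    # rename only H-nodes that clash with G, picking the next free integer
    used = set(G.nodes()) | set(H.nodes())
    mapping, candidate = {}, 0
    for n in H.nodes():
        if n in G:
            while candidate in used:
                candidate += 1
            mapping[n] = candidate
            used.add(candidate)
    return nx.union(G, nx.relabel_nodes(H, mapping))

G = nx.Graph(); G.add_nodes_from([0, 3])
H = nx.Graph(); H.add_nodes_from([1, 2, 3, 4])
U = union_keep_G(G, H)
```

In this example only H's node 3 clashes, and the first unused integer is 5, giving the node set from the question.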