Problem switching from Binary to PositiveIntegers domain
Just started with Pyomo and was taking some samples from this knapsack problem tutorial: https://pyomo-contrib-simplemodel.readthedocs.io/en/latest/knapsack.html
However, for my optimization problem I'd like to be able to use the PositiveIntegers domain. I tried editing the tutorial code as follows:
from pyomo.environ import *  # provides ConcreteModel, Set, Var, SolverFactory, etc.

v = {'hammer':8, 'wrench':3, 'screwdriver':6, 'towel':11}
w = {'hammer':5, 'wrench':7, 'screwdriver':4, 'towel':3}
limit = 14
items = list(sorted(v.keys()))
# Create model
m = ConcreteModel()
# Initialization **ADDED BY ME**
m.x_init = Set(initialize=['hammer','wrench','screwdriver','towel'])
lb = {'hammer':0,'wrench':0,'screwdriver':0,'towel':0}
ub = {'hammer':2,'wrench':2,'screwdriver':2,'towel':2}
# **ADDED BY ME**
def fb(model, i):
    return (lb[i], ub[i])
# Variables
m.x = Var(m.x_init,domain=PositiveIntegers,bounds=fb)
## Changed from
# m.x = Var(items, domain=Binary)
# Objective
m.value = Objective(expr=sum(v[i]*m.x[i] for i in items))
# Constraint
m.weight = Constraint(expr=sum(w[i]*m.x[i] for i in items) <= limit)
# Optimize
solver = SolverFactory('glpk')
status = solver.solve(m)
# Print the status of the solved LP
print("Status = %s" % status.solver.termination_condition)
# Print the value of the variables at the optimum
for i in items:
    print("%s = %f" % (m.x[i], value(m.x[i])))
# Print the value of the objective
print("Objective = %f" % value(m.value))
And I am getting the error "No value for uninitialized NumericValue object x[hammer]". I'm really stumped. If the line m.x_init = Set(initialize=[...]) isn't creating a set/unordered collection of index labels for the variables, what is it doing? Or is it just not in the form x['index']?
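For what it's worth, the arithmetic below (plain Python, no Pyomo) sketches one likely cause, though it is inferred from the snippet rather than certain: PositiveIntegers is the set {1, 2, 3, ...}, and a variable's effective lower bound is the tighter of the domain bound and the explicit bound, so every x[i] is forced to at least 1 despite lb being 0. With all four items forced in once, the minimum weight already exceeds the limit, the model is infeasible, glpk loads no solution, and reading value(m.x[i]) then raises exactly this "uninitialized NumericValue" error. NonNegativeIntegers would keep 0 available.

```python
# Pyomo's PositiveIntegers domain is {1, 2, 3, ...}, so the effective lower
# bound of each x[i] is max(1, lb[i]) = 1 even though bounds=fb allows 0.
w = {'hammer': 5, 'wrench': 7, 'screwdriver': 4, 'towel': 3}
limit = 14

min_weight = sum(1 * w[i] for i in w)  # every item forced to at least one copy
print(min_weight)          # 19
print(min_weight > limit)  # True: the model is infeasible before glpk even starts
```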
See also questions close to this topic

AttributeError: 'datetime.timezone' object has no attribute 'name' when trying to run Apache Airflow scheduler
I have just set up a new Ubuntu machine, created a Python 3.6 venv and installed Airflow. I can start the webserver, but when I try to run
airflow scheduler
I keep getting this error:

File "/home/ubuntu/venv/airflow/lib/python3.6/site-packages/airflow/models/dag.py", line 398, in following_schedule
    tz = pendulum.timezone(self.timezone.name)
AttributeError: 'datetime.timezone' object has no attribute 'name'
Here is an excerpt of my pip freeze:

apache-airflow==1.10.5
boto3==1.9.253
Pillow==6.2.1
selenium==3.141.0
slackclient==1.2.1
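The attribute error itself can be reproduced with nothing but the standard library: datetime.timezone objects really have no .name attribute (pendulum timezone objects do, which appears to be what Airflow's following_schedule expects). A minimal check:

```python
from datetime import timezone

# The attribute Airflow's scheduler tries to read does not exist on the
# standard library's timezone type:
print(hasattr(timezone.utc, "name"))  # False
print(timezone.utc.tzname(None))      # 'UTC' (the stdlib's spelling instead)
```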

Importing variables in Python, new topic as the old one does not work for me
I have read all of the answers at the following link: Importing variables from another file? I have tried what they say, importing my variables from one file to another by typing:
from file1 import *
or
from file1 import var1
Unfortunately it does not work as intended, because I am not importing only the variable but the whole file1. That is not the purpose (I really only want var1; the script is rather big), so I am wondering if I am the only one noticing this behaviour, as everybody there seems satisfied with the answer.
Thanks for your help.
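For context on the behaviour described above: `from file1 import var1` still executes the whole of file1 once (Python has to run the module to know what var1 is) and only then binds the single name. The usual workaround is to put run-only code behind an `if __name__ == "__main__":` guard. A self-contained sketch; the file1 contents here are hypothetical:

```python
import os
import sys
import tempfile
import textwrap

# Hypothetical file1: its module-level code runs on ANY import,
# even `from file1 import var1`.
module_src = textwrap.dedent("""
    var1 = 42
    executed = []          # record that module-level code ran
    executed.append(True)
    if __name__ == "__main__":
        print("only runs when file1.py is executed directly, never on import")
""")

tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "file1.py"), "w") as f:
    f.write(module_src)
sys.path.insert(0, tmpdir)

from file1 import var1   # binds one name, but first executes all of file1
import file1             # already cached; does not run the module again

print(var1)              # 42
print(file1.executed)    # [True] -- the top-level code did run exactly once
```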

What happens to an open cursor before rolling back the transaction in Postgres using Psycopg2?
I am trying to copy a list of CSV files into Postgres using Psycopg2's copy_expert(). I'm opening a cursor to execute the copy command for each file separately and closing it after the data has been copied. But if I get an error for any file in this process, I roll back the transaction.
If I get an error I'm not sure what's going to happen to the cursor that I have opened before copying the CSV file.
Will it be closed automatically after the rollback is done on the connection or will it stay just like that?
I've checked the docs for rollback on psycopg2 http://initd.org/psycopg/docs/connection.html#connection.rollback. But still, I'm unsure of what happens to the cursor that hasn't been closed as they haven't mentioned anything related to the cursor in the docs.
try:
    for tablename, filename in self.mapping:
        cur = self.conn.cursor()
        filename = f"{self.to_db}{wid}(unknown)"
        filename = f"{os.path.join(self.directory, filename)}.csv"
        sql = f"copy {tablename} from stdin with delimiter as ',' csv header;"
        with open(f"(unknown)", 'r') as file:
            cur.copy_expert(sql, file)
        cur.close()
    self.conn.commit()
except Exception as e:
    self.conn.rollback()
    return e
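As far as the psycopg2 docs go, rollback() only resets the server-side transaction; it does not close cursor objects, which are client-side and stay open (and reusable) until cur.close() is called or the connection is closed. If you want the cursor closed on failure too, a try/finally makes that explicit instead of relying on rollback. A sketch with stub classes standing in for the real connection and cursor:

```python
class StubCursor:
    """Stand-in for psycopg2's cursor: tracks only open/closed state."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


class StubConnection:
    """Stand-in for psycopg2's connection."""
    def cursor(self):
        return StubCursor()

    def rollback(self):
        pass  # resets the transaction; does NOT touch existing cursors


conn = StubConnection()

cur = conn.cursor()
try:
    raise RuntimeError("copy failed")   # simulate a bad CSV file
except RuntimeError:
    conn.rollback()
print(cur.closed)   # False -- rollback alone left the cursor open

# Closing deterministically, regardless of errors:
cur2 = conn.cursor()
try:
    try:
        raise RuntimeError("copy failed")
    except RuntimeError:
        conn.rollback()
finally:
    cur2.close()
print(cur2.closed)  # True
```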

Anaconda/Jupyter Notebook Distplot Keyerror Exception
I am trying to use a for loop to create distribution plots for some variables that I am reading from an excel/csv dataset. The code seems to be working fine until just after the target variable proportion pie and bar graphs are plotted. After the pie and bar graphs, I expected to see distribution plots but a KeyError exception is thrown.
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
import itertools
warnings.filterwarnings("ignore")
#matplotlib inline
print("This is a test right before excel file is read")
# Utilize the pandas library to read the data
dataset = pd.read_csv(r'C:\userpath\CSS581\Machine Learning Project\pulsar_stars_test.csv')
#dataset = pd.read_excel(r'C:\userpath\CSS581\Machine Learning Project\pulsar_stars_test.xlsx')
# Print the number of rows and columns that the data has to the user
print("This is the number of rows: ", dataset.shape[0])
print("This is the number of columns: ", dataset.shape[1])
# Use pandas to print out the information about the data
print("This is the data information: ", dataset.info())
# Use pandas to display information about missing data
print("This is the missing data: ", dataset.isnull().sum())
# Make a figure appear to display a dataset summary to the user
plt.figure(figsize = (12, 8))
sns.heatmap(dataset.describe()[1:].transpose(), annot = True, linecolor = "w", linewidth = 2, cmap = sns.color_palette("Set2"))
plt.title("Data Summary")
plt.show()
# Instantiate another figure to display some correlation data to the user
correlation = dataset.corr()
plt.figure(figsize = (10, 8))
sns.heatmap(correlation, annot = True, cmap = sns.color_palette("magma"), linewidth = 2, edgecolor = "k")
plt.title("CORRELATION BETWEEN VARIABLES")
plt.show()
# Compute the proportion of each target variable in the dataset
plt.figure(figsize = (12, 6))
plt.subplot(121)
ax = sns.countplot(y = dataset["target_class"], palette = ["r", "g"], linewidth = 1, edgecolor = "k"*2)
for i, j in enumerate(dataset["target_class"].value_counts().values):
    ax.text(.7, i, j, weight = "bold", fontsize = 27)
plt.title("Count for target variable in dataset")
plt.subplot(122)
plt.pie(dataset["target_class"].value_counts().values, labels = ["not pulsar stars", "pulsar stars"], autopct = "%1.0f%%", wedgeprops = {"linewidth":2, "edgecolor":"white"})
#plt.pie(data["target_class"].value_counts().values, labels = ["not pulsar stars", "pulsar stars"], autopct = "%1.0f%%", wedgeprops = {"linewidth":2, "edgecolor":"white"})
my_circ = plt.Circle((0,0), .7, color = "white")
plt.gca().add_artist(my_circ)
plt.subplots_adjust(wspace = .2)
plt.title("Proportion of target variable in dataset")
plt.show()
for i, j, k in itertools.zip_longest(columns, range(length), colors):
    plt.subplot(length/2,length/4,j+1)
    plt.subplot(length/2, length/4, j+1)
    sns.distplot(dataset[i], color = k)
    plt.title(i)
    plt.subplots_adjust(hspace = 0.3)
    plt.axvline(dataset[i].mean(), color = "k", linestyle = "dashed", label = "MEAN")
    plt.axvline(dataset[i].std(), color = "b", linestyle = "dotted", label = "STANDARD DEVIATION")
    plt.legend(loc = "upper right")
I am not very familiar with anaconda/jupyter notebook/python so I do not really know what to research regarding this issue.
What I expected to see output was plots. But I got the following error messages:
KeyError                                  Traceback (most recent call last)
~\Anaconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
   2656         try:
-> 2657             return self._engine.get_loc(key)
   2658         except KeyError:

pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()

KeyError: 'mean_dmnsr_curve'

During handling of the above exception, another exception occurred:

KeyError                                  Traceback (most recent call last)
<ipython-input-8-c22a6f74190e> in <module>
     66     plt.subplot(length/2,length/4,j+1)
     67     plt.subplot(length/2, length/4, j+1)
->   68     sns.distplot(dataset[i], color = k)
     69     plt.title(i)
     70     plt.subplots_adjust(hspace = 0.3)

~\Anaconda3\lib\site-packages\pandas\core\frame.py in __getitem__(self, key)
   2925         if self.columns.nlevels > 1:
   2926             return self._getitem_multilevel(key)
-> 2927         indexer = self.columns.get_loc(key)
   2928         if is_integer(indexer):
   2929             indexer = [indexer]

~\Anaconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
   2657             return self._engine.get_loc(key)
   2658         except KeyError:
-> 2659             return self._engine.get_loc(self._maybe_cast_indexer(key))
   2660         indexer = self.get_indexer([key], method=method, tolerance=tolerance)
   2661         if indexer.ndim > 1 or indexer.size > 1:

pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()

KeyError: 'mean_dmnsr_curve'
Does anyone know what could cause these exceptions? I am trying to figure out if I have more than one exception, because the error message says "during handling of the above exception, another exception occurred." Since it says "another exception," does this mean that two or more exceptions have occurred? The message seems to reference some exception located "above," but it doesn't say what that "above" exception is. The only line in my code that is indicated as being faulty is line 68, which is where I call sns.distplot.
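For what it's worth, the KeyError names the missing key: 'mean_dmnsr_curve' is not one of the DataFrame's columns, so dataset[i] fails on the first name in columns that the CSV doesn't actually contain. A quick way to see such a mismatch before plotting is to diff the expected names against the real ones (plain-Python sketch; these column names are hypothetical):

```python
# Hypothetical names: what the plotting loop expects vs what the CSV has.
expected = ["mean_profile", "std_profile", "mean_dmnsr_curve", "std_dmnsr_curve"]
actual = ["mean_profile", "std_profile", "mean_dm_snr_curve", "std_dm_snr_curve"]

missing = sorted(set(expected) - set(actual))
print(missing)     # ['mean_dmnsr_curve', 'std_dmnsr_curve']

# Safer: only plot columns that really exist in the data.
plottable = [c for c in expected if c in set(actual)]
print(plottable)   # ['mean_profile', 'std_profile']
```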

Pycharm type hints warning for classes instead of instances
I am trying to understand why PyCharm warns me of a wrong type when using an implementation of an abstract class with a static method as a parameter.
To demonstrate, I will make a simple example. Let's say I have an abstract class with one method, a class that implements (inherits) this interface-like abstract class, and a method that gets the implementation it should use as a parameter.
import abc

class GreetingMakerBase(abc.ABC):
    @abc.abstractmethod
    def make_greeting(self, name: str) -> str:
        """ Makes greeting string with name of person """

class HelloGreetingMaker(GreetingMakerBase):
    def make_greeting(self, name: str) -> str:
        return "Hello {}!".format(name)

def print_greeting(maker: GreetingMakerBase, name):
    print(maker.make_greeting(name))

hello_maker = HelloGreetingMaker()
print_greeting(hello_maker, "John")
Notice that in the type hint of print_greeting I used GreetingMakerBase, and because isinstance(hello_maker, GreetingMakerBase) is True, PyCharm does not complain about it. The problem is that I have many implementations of my class and don't want to make an instance of each, so I will make the make_greeting method static, like this:

class GreetingMakerBase(abc.ABC):
    @staticmethod
    @abc.abstractmethod
    def make_greeting(name: str) -> str:
        """ Makes greeting string with name of person """

class HelloGreetingMaker(GreetingMakerBase):
    @staticmethod
    def make_greeting(name: str) -> str:
        return "Hello {}!".format(name)

def print_greeting(maker: GreetingMakerBase, name):
    print(maker.make_greeting(name))

print_greeting(HelloGreetingMaker, "John")
This still works the same way, but apparently because the parameter in the function call is now the class name instead of an instance of it, Pycharm complains that:
Expected type 'GreetingMakerBase', got 'Type[HelloGreetingMaker]' instead.
Is there a way I can resolve this warning without having to instantiate the HelloGreetingMaker class?
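One standard way to annotate "a class object, not an instance" is typing.Type. A sketch of the second version with only the annotation changed (the rest is the question's code):

```python
import abc
from typing import Type

class GreetingMakerBase(abc.ABC):
    @staticmethod
    @abc.abstractmethod
    def make_greeting(name: str) -> str:
        """Makes greeting string with name of person"""

class HelloGreetingMaker(GreetingMakerBase):
    @staticmethod
    def make_greeting(name: str) -> str:
        return "Hello {}!".format(name)

# Type[GreetingMakerBase] accepts the class itself (or any subclass),
# which matches passing HelloGreetingMaker rather than an instance.
def print_greeting(maker: Type[GreetingMakerBase], name: str) -> None:
    print(maker.make_greeting(name))

print_greeting(HelloGreetingMaker, "John")  # prints: Hello John!
```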
How to fix the problem of not learning in CNN with Tensorflow?
I'm re-creating a network that I had in Keras, now in TensorFlow. The structure is the same, yet the neural network is not able to learn and gets stuck during training.
I've tried everything, but I can't get the neural network to learn.
#We initialize the input data with placeholders
tf_data = tf.placeholder(tf.float32, shape=(None, HEIGTH, WIDTH, CHANNELS))
tf_labels = tf.placeholder(tf.float32, shape=(None, LABELS))

FILTER1 = (4,4)
STRIDE1 = (2,1)
FILTER2 = (2,1)
STRIDE2 = (1,1)
DEPTH = 32 #32 # Convolutional Kernel depth size == Number of Convolutional Kernels
HIDDEN1 = 128 #1024 # Number of hidden neurons in the fully connected layer
HIDDEN2 = 256
HIDDEN3 = 512
keep_prob1 = 0.5
keep_prob2 = 0.25
keep_prob3 = 0.5

#CNN
w1 = tf.Variable(tf.truncated_normal([FILTER1[0], FILTER1[1], CHANNELS, DEPTH], stddev=0.1))
b1 = tf.Variable(tf.contrib.layers.xavier_initializer([DEPTH])) #output 100 , 9 , 32
w2 = tf.Variable(tf.truncated_normal([FILTER2[0], FILTER2[1], DEPTH, 2*DEPTH], stddev=0.1))
b2 = tf.Variable(tf.constant(1.0, shape=[2*DEPTH])) #output

#FC
w3 = tf.Variable(tf.truncated_normal([384*650, HIDDEN1], stddev=0.1))
b3 = tf.Variable(tf.constant(1.0, shape=[HIDDEN1]))
w4 = tf.Variable(tf.truncated_normal([HIDDEN1, HIDDEN2], stddev=0.1))
b4 = tf.Variable(tf.constant(1.0, shape=[HIDDEN2]))
w5 = tf.Variable(tf.truncated_normal([HIDDEN2, HIDDEN3], stddev=0.1))
b5 = tf.Variable(tf.constant(1.0, shape=[HIDDEN3]))
w6 = tf.Variable(tf.truncated_normal([HIDDEN3, LABELS], stddev=0.1))
b6 = tf.Variable(tf.constant(1.0, shape=[LABELS]))

def logits(data):
    # Convolutional layer 1
    x = tf.nn.conv2d(data, w1, [1, STRIDE1[0], STRIDE1[1], 1], padding='SAME')
    x = tf.nn.lrn(x, 4, bias=1.0, alpha=0.001/9.0, beta=0.75)
    x = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
    x = tf.nn.relu(x + b1)
    # Convolutional layer 2
    x = tf.nn.conv2d(x, w2, [1, STRIDE2[0], STRIDE2[1], 1], padding='SAME')
    x = tf.nn.lrn(x, 4, bias=1.0, alpha=0.001/9.0, beta=0.75)
    x = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
    x = tf.nn.relu(x + b2)
    # Fully connected layer
    x = tf.reshape(x, (1, 384*650))
    layer_1 = tf.nn.relu(tf.matmul(x, w3) + b3)
    drop_out = tf.nn.dropout(layer_1, keep_prob1)  # DROPOUT here
    layer_2 = tf.nn.relu(tf.matmul(drop_out, w4) + b4)
    drop_out = tf.nn.dropout(layer_2, keep_prob2)  # DROPOUT here
    layer_3 = tf.nn.relu(tf.matmul(drop_out, w5) + b5)
    drop_out = tf.nn.dropout(layer_3, keep_prob3)  # DROPOUT here
    return tf.matmul(drop_out, w6) + b6

# Prediction:
tf_pred = tf.nn.softmax(logits(tf_data))

#We use the categorical cross entropy loss for training the model.
tf_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits(tf_data), labels=tf_labels))
tf_accuracy = 100*tf.reduce_mean(tf.to_float(tf.equal(tf.argmax(tf_pred, 1), tf.argmax(tf_labels, 1))))
tf_opt = tf.train.GradientDescentOptimizer(LR)
tf_step = tf_opt.minimize(tf_loss)

# =============================================================================
# Train
# =============================================================================
init = tf.global_variables_initializer()
session = tf.Session()
session.run(init)

ss = ShuffleSplit(n_splits=STEPS, train_size=BATCH)
ss.get_n_splits(train_data, train_labels)
history = [(0, np.nan, 2)]  # Initial Error Measures
for step, (idx, _) in enumerate(ss.split(train_data, train_labels), start=1):
    fd = {tf_data: train_data[idx], tf_labels: train_labels[idx]}
    session.run(tf_step, feed_dict=fd)
    # To plot a graphic
    if step % 5 == 0:
        fd = {tf_data: valid_data, tf_labels: valid_labels}
        valid_loss, valid_accuracy = session.run([tf_loss, tf_accuracy], feed_dict=fd)
        history.append((step, valid_loss, valid_accuracy))
        print('Step %i \t Valid. Acc. = %f' % (step, valid_accuracy), end='\n')
steps, loss, acc = zip(*history)
It gets stuck in the same accuracy even after a high number of steps.
Step 5     Valid. Acc. = 47.000000
Step 10    Valid. Acc. = 52.999996
Step 15    Valid. Acc. = 47.000000
Step 20    Valid. Acc. = 47.000000
Step 25    Valid. Acc. = 47.000000
Step 30    Valid. Acc. = 47.000000
Step 35    Valid. Acc. = 47.000000
Step 40    Valid. Acc. = 47.000000
Step 45    Valid. Acc. = 47.000000
Step 50    Valid. Acc. = 47.000000
Step 55    Valid. Acc. = 47.000000
Step 60    Valid. Acc. = 47.000000
Step 65    Valid. Acc. = 47.000000
Step 70    Valid. Acc. = 47.000000
Step 75    Valid. Acc. = 47.000000
Step 80    Valid. Acc. = 47.000000
Step 85    Valid. Acc. = 47.000000
Step 90    Valid. Acc. = 47.000000
Step 95    Valid. Acc. = 47.000000
Step 100   Valid. Acc. = 47.000000
Step 105   Valid. Acc. = 47.000000
Step 110   Valid. Acc. = 47.000000
Step 115   Valid. Acc. = 47.000000
Step 120   Valid. Acc. = 47.000000
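A validation accuracy frozen at a single value is often a sign that the network predicts one class for every input, in which case accuracy equals that class's share of the validation set. The arithmetic, as a plain-Python sketch with hypothetical labels matching the 47% figure:

```python
# Hypothetical one-hot validation labels: 47 samples of class 0, 53 of class 1.
valid_labels = [[1, 0]] * 47 + [[0, 1]] * 53

# If the network always predicts class 0, accuracy equals the class-0 share.
always_class0 = [[1, 0]] * len(valid_labels)
accuracy = 100.0 * sum(p == y for p, y in zip(always_class0, valid_labels)) / len(valid_labels)
print(accuracy)  # 47.0
```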

Python pyomo : how and where to store sumproduct involving decision variables (1d array) and fixed data (matrix)
Brief background: I am trying to solve an optimization problem where I need to select the best store from which an order can be fulfilled. For this illustration I have 2 orders (O1, O2) and 3 stores (str_1, str_2, str_3). While selecting the best store to fulfill an order, there are 4 factors: A, B, C and D. So for fulfilling order 1, each store will have a set of 4 scores corresponding to each factor. Each score is between 0 and 1.
I need to determine the optimal weights for the 4 factors (the decision variables wtA, wtB, wtC, wtD) such that the sumproduct of the weights and the scores is maximum. (Weights should be between 0 and 100.) For instance, to check whether store 1 can service order 1: sumproduct = wtA * score_O1_str_1_A + wtB * score_O1_str_1_B + wtC * score_O1_str_1_C + wtD * score_O1_str_1_D
I am having difficulty in storing the above sumproduct for the 6 options: O1-str_1; O1-str_2; O1-str_3; O2-str_1; O2-str_2; O2-str_3
I have written some code, but I am stuck at the above point. I am new to stackoverflow and pyomo, any help is highly appreciated
Please see the below code to see what I have done and where I am stuck:
from pyomo.environ import *

model = ConcreteModel(name="(weights)")

# scores for factors A, B, C and D for each order and store combination
order_str_scores = {
    ('O1', 'str_1'): [0.88, 0.85, 0.88, 0.93],  # if order 1 is fulfilled from store 1 then these are the scores
    ('O1', 'str_2'): [0.93, 0.91, 0.95, 0.86],
    ('O1', 'str_3'): [0.83, 0.83, 0.87, 0.9],
    ('O2', 'str_1'): [0.85, 0.86, 0.84, 0.98],
    ('O2', 'str_2'): [0.87, 0.8, 0.85, 0.87],
    ('O2', 'str_3'): [0.91, 0.87, 0.95, 0.83],
}

model.orders = list(set([i[0] for i in order_str_scores.keys()]))
model.stores = list(set([i[1] for i in order_str_scores.keys()]))

# 4 factors (A, B, C & D) whose scores are in the 'order_str_scores' dictionary.
# These will be indices for decision variables
model.factors = ['A', 'B', 'C', 'D']

# below 4 decision variables (one for each factor) will hold the optimal number between 0 and 100
def dv_bounds(m, i):
    return (0, 100)
model.x1 = Var(model.factors, within=NonNegativeReals, bounds=dv_bounds)

# Sum of these 4 decision variables should be equal to 100
def sum_wts(m):
    return sum(m.x1[i] for i in model.factors) == 100
model.sum_wts = Constraint(rule=sum_wts)

# BELOW IS WHERE I AM FACING THE PROBLEM:
# here I want to store the sumproduct of decision variables (model.x1) and scores from order_str_scores
# e.g. if O1 is fulfilled by Store 1 then:
#   0.88*model.x1['A'] + 0.85*model.x1['B'] + 0.88*model.x1['C'] + 0.93*model.x1['D']
# similarly for the remaining 5 options: O1 -> str_2; O1 -> str_3; O2 -> str_1; O2 -> str_2; O2 -> str_3
model.x2 = Var(model.orders, model.stores, within=NonNegativeReals)
def sum_product(m, i, j):
    return m.x2[i,j] == sum(m.x1[n] * order_str_scores[i,j][q] for n, q in zip(model.factors, range(4)))
model.sum_product = Var(model.orders, model.stores, rule=sum_product)

# THIS IS WHAT I WILL DO LATER, IF THE ABOVE GETS RESOLVED:
# then for each order, I want to store the maximum score
model.x3 = Var(model.orders, within=NonNegativeReals, bounds=dv_bounds)
model.cons = ConstraintList()
for i in model.orders:
    for j in model.stores:
        model.cons.add(model.x3[i] >= model.x2[i,j])

# I want to maximize the sum of the maximum score I get for each order
def obj_rule(m):
    return sum(m.x3[i] for i in model.orders)
model.obj = Objective(rule=obj_rule, sense=maximize)
What I expect is model.x2 decison variable (6 of them for each order store combination) to hold the corresponding sumproduct of the optimal weights (model.x1) and the scores (defined in the data  order_str_scores)
I am getting the below error :
ERROR: evaluating object as numeric value: x2[O1,str_1]
    (object: <class 'pyomo.core.base.var._GeneralVarData'>)
No value for uninitialized NumericValue object x2[O1,str_1]
ERROR: evaluating object as numeric value: x2[O1,str_1] == 0.88*x1[A] + 0.85*x1[B] + 0.88*x1[C] + 0.93*x1[D]
    (object: <class 'pyomo.core.expr.logical_expr.EqualityExpression'>)
No value for uninitialized NumericValue object x2[O1,str_1]
ERROR: Constructing component 'sum_product' from data=None failed:
    ValueError: No value for uninitialized NumericValue object x2[O1,str_1]
I want to hold the sumproduct and then use it further .. e.g. x2[O1,str_1] == 0.88*x1[A] + 0.85*x1[B] + 0.88*x1[C] + 0.93*x1[D] etc..
Then the idea is to pick the maximum of the sumproducts for each order. So for order 1 I will have 3 sumproducts (one per store) and will pick the maximum among them, but right now I am unable to hold them in a variable.
Thank you!
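Independent of Pyomo, the six sumproducts themselves are easy to sanity-check in plain Python. This sketch uses the scores from the question and hypothetical fixed weights (in the real model the weights are decision variables, not constants):

```python
order_str_scores = {
    ('O1', 'str_1'): [0.88, 0.85, 0.88, 0.93],
    ('O1', 'str_2'): [0.93, 0.91, 0.95, 0.86],
    ('O1', 'str_3'): [0.83, 0.83, 0.87, 0.9],
    ('O2', 'str_1'): [0.85, 0.86, 0.84, 0.98],
    ('O2', 'str_2'): [0.87, 0.8, 0.85, 0.87],
    ('O2', 'str_3'): [0.91, 0.87, 0.95, 0.83],
}
weights = {'A': 25, 'B': 25, 'C': 25, 'D': 25}  # hypothetical weights summing to 100

# One sumproduct per (order, store) pair: the quantity x2 is meant to hold.
sumproducts = {
    key: sum(w * s for w, s in zip(weights.values(), scores))
    for key, scores in order_str_scores.items()
}
print(round(sumproducts[('O1', 'str_1')], 2))  # 25*(0.88+0.85+0.88+0.93) = 88.5

# Best store per order = max over stores: the quantity x3 is meant to hold.
best = {o: max(v for (oo, _), v in sumproducts.items() if oo == o) for o in ('O1', 'O2')}
```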

Nested sum in Pyomo
I am brand new to Pyomo, and pretty new to Python. I am trying to write a model that includes an objective function with a nested sum. Here's a toy example that hopefully makes this question relevant to others as well:
I have a set of customers C and a set of servers S. I hire the servers by the minute, and I have a Pyomo Set of costs representing the cost per minute to hire each server (some more expensive than others). This Set has length equal to the number of servers. I also have a two-dimensional Pyomo set of service times representing the time needed to serve each customer. This is dependent on both the server and the customer, so it's indexed by both.
The total cost is the sum of the server costs, indexed by server, times the sum of the service times, indexed by both server and customer.
I can't figure out how to represent this in Pyomo, because the sets I'm indexing over are different. This link: Pyomo sum inside sum with various index seems like a similar question, but doesn't help. Keep in mind this all has to live inside a Pyomo Objective function.
I tried using Pyomo's sum_product function and indexing over the Cartesian product of both (index = model.customers * model.servers), but that doesn't work because servers aren't indexable over customers.
I also tried nesting the sums (sum_product(server costs, sum_product(service times, index = model.customers * model.servers), index = servers)), but that doesn't work either because the inner sum_product becomes a LinearExpression object that isn't subscriptable by server.
How can I properly express this sum? Thanks so much.
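For reference, the double sum itself is just "for each server, cost per minute times total minutes". In plain Python with hypothetical data it looks like the following; in Pyomo, a nested generator expression of the same shape (rather than sum_product) can typically be returned directly from an Objective rule, since cost is indexed only by server while the inner sum runs over customers:

```python
# Hypothetical data: 2 servers, 3 customers.
servers = ["s1", "s2"]
customers = ["c1", "c2", "c3"]
cost = {"s1": 2.0, "s2": 3.0}   # cost per minute, indexed by server
time = {("s1", "c1"): 4, ("s1", "c2"): 1, ("s1", "c3"): 5,
        ("s2", "c1"): 2, ("s2", "c2"): 6, ("s2", "c3"): 1}

# Nested sum: outer sum over servers, inner sum over customers.
total = sum(cost[s] * sum(time[s, c] for c in customers) for s in servers)
print(total)  # 2.0*(4+1+5) + 3.0*(2+6+1) = 20.0 + 27.0 = 47.0
```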

Is there any command in pyomo which is similar to the "check" command in YALMIP?
When I use YALMIP in Matlab, I can use the "check" command to find out the primal residual for each constraint, and so I can know which constraints hold with equality at the optimal solution.
So how can I get such information when I use pyomo in python? Is there any command in pyomo that can show the details of the solution?
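Pyomo has no single equivalent of YALMIP's "check", as far as I know, but the same information is recoverable after a solve: each constraint exposes its body and bounds, so the primal residual (slack) can be computed per constraint, and a residual of zero identifies the constraints that hold with equality (model.display() also prints constraint bodies and bounds). The arithmetic, in plain Python with hypothetical solution values:

```python
# Hypothetical solved model: x = 2.0, y = 1.0 with constraints
#   c1: x + y <= 3    and    c2: x - y <= 4
solution = {"x": 2.0, "y": 1.0}
constraints = {
    "c1": (lambda v: v["x"] + v["y"], 3.0),   # (body, upper bound)
    "c2": (lambda v: v["x"] - v["y"], 4.0),
}

# Primal residual (slack) per constraint: upper bound minus body value.
residuals = {}
for name, (body, upper) in constraints.items():
    residuals[name] = upper - body(solution)
tight = {n: abs(r) < 1e-9 for n, r in residuals.items()}

print(residuals["c1"], tight["c1"])  # 0.0 True  -- equality holds at the optimum
print(residuals["c2"], tight["c2"])  # 3.0 False -- constraint is slack
```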