One-way conditional statement in binary variable linearization
I am trying to write a one-way conditional statement with binary variables. My condition is
(if x=1 then y=0)
and it is the only condition that should be met. So I want to have:
(if x=0 then y=0 or 1)
(if x=1 then y=0)
(if y=0 then x=0 or 1)
(if y=1 then x=0 or 1)
The problem with using y <= 1 - x
is that the statement is two-way (for x and y: (if x=1 then y=0) and (if y=1 then x=0)), and I want to have only (if x=1 then y=0).
I have already tried big-M and some other methods that I knew, but I did not achieve any results.
Can anyone please help me? I have been stuck on this problem for a couple of days...
Thank you
1 answer

If we have x=1 ⇒ y=0, then it follows that y=1 ⇒ x=0; you cannot have one without the other. The second implication is the contrapositive of the first, and the inference rule behind it is sometimes called modus tollens in propositional logic.
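To make the equivalence concrete, here is a minimal Python sketch (not part of the original answer) that enumerates all four binary assignments: the constraint y <= 1 - x excludes exactly the pair (x, y) = (1, 1), which is the same pair excluded by the implication and by its contrapositive.

```python
# Enumerate all binary (x, y) pairs and check that the constraint
# y <= 1 - x admits exactly the pairs allowed by "x = 1 => y = 0",
# which are also exactly the pairs allowed by "y = 1 => x = 0".
from itertools import product

for x, y in product((0, 1), repeat=2):
    feasible = y <= 1 - x
    implication = (x != 1) or (y == 0)       # x = 1  =>  y = 0
    contrapositive = (y != 1) or (x == 0)    # y = 1  =>  x = 0
    assert feasible == implication == contrapositive
    print((x, y), "allowed" if feasible else "excluded")
```

So any correct linearization of the one-way implication necessarily enforces its contrapositive as well.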
See also questions close to this topic

Minimizing a function with Tensorflow optimizer.minimize() fails due to missing input for function
I am trying to minimize a custom function with TensorFlow. The function is complicated and I don't want to implement it with a lambda. Is there any way to minimize a custom function with the TF optimizer.minimize() method or tf.math.minimize() without passing the variable to a lambda and writing the function in front of it?
I cannot write the real function as: func_to_minimize = lambda var1: (var1 - 5) ** 2.
I need to write it as a function and then call it.

```python
import tensorflow as tf
import tensorflow_probability as tfp

opt = tf.keras.optimizers.Adam(learning_rate=0.1)
var1 = tf.Variable(10.0)

def func_to_minimize(var1):
    return (var1 - 5) ** 2

optim = opt.minimize(func_to_minimize, [var1]).numpy()
print(var1)
```
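One way around the lambda, assuming the TF2 Optimizer.minimize API (which accepts a zero-argument callable as its loss), is to bind the variable into the named function with functools.partial. A minimal sketch of the binding pattern, shown without TensorFlow for brevity:

```python
# Bind an argument into a named function so it becomes a zero-argument
# callable, as a callable-loss Optimizer.minimize expects; no lambda needed.
from functools import partial

def func_to_minimize(var1):
    return (var1 - 5) ** 2

loss_fn = partial(func_to_minimize, 10.0)  # zero-argument callable
print(loss_fn())  # -> 25.0
```

With TensorFlow this would read opt.minimize(partial(func_to_minimize, var1), [var1]); the names mirror the question's code and are otherwise assumptions.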

How can you optimize a scalar parameter with respect to a Jacobian matrix?
I am trying to optimize two parameters α and β, which are multiplied with a matrix input.
output = some_function(α * matrix1 + β * matrix2)
Now I am calculating my Loss function with respect to the desired output:
loss = 0.5 * (truth - output)^2
Obviously this returns a scalar value, however, now I want to optimize for α and β (also scalar values). To perform gradient descent for these two parameters, I need to apply the chain rule, but I get a Jacobian matrix as a result, because I am differentiating a scalar value by matrix1 or matrix2 accordingly.
My goal is to get a single scalar value in the end, by which to update my two learning parameters.
Is there an easy way to circumvent this whole issue in python, scipy, etc?
Can I just take the average change that I calculate in my Jacobian matrix?
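Regarding that last question: the chain rule already contracts the Jacobian down to a scalar; the per-entry contributions are summed, not averaged. A minimal numpy sketch, assuming some_function is an elementwise tanh (a placeholder, not the asker's actual function):

```python
import numpy as np

rng = np.random.default_rng(0)
matrix1, matrix2 = rng.standard_normal((2, 4, 4))
truth = rng.standard_normal((4, 4))
alpha, beta = 0.3, 0.7

z = alpha * matrix1 + beta * matrix2
output = np.tanh(z)                      # stand-in for some_function
loss = 0.5 * np.sum((truth - output) ** 2)

# Chain rule: dL/dalpha = sum_ij (dL/doutput_ij) * (doutput_ij/dalpha)
dL_doutput = output - truth              # from 0.5 * (truth - output)^2
doutput_dalpha = (1.0 - output ** 2) * matrix1   # tanh'(z) * dz/dalpha
grad_alpha = np.sum(dL_doutput * doutput_dalpha)  # a single scalar

# Finite-difference check that the summed contraction is the gradient
eps = 1e-6
loss_eps = 0.5 * np.sum((truth - np.tanh((alpha + eps) * matrix1 + beta * matrix2)) ** 2)
print(abs(grad_alpha - (loss_eps - loss) / eps) < 1e-4)
```

The same contraction against matrix2 gives the gradient for beta; averaging instead of summing would silently rescale the effective learning rate by the number of matrix entries.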

Finding strong, reputable free forums
Can you recommend some free forums whose websites are strong and reputable? I want to write some of my posts on them to help develop my website.
How to constrain UITextViews to fill in whitespace?

Struggles with side-by-side NSTableView and constraint layout, part 2
Previously, I asked about using constraint layout to put two NSTableViews side by side: Side-by-side NSTableView using StackView and Constraints. Thanks to Willeke's help, I was able to achieve this using only constraints, and no StackView.
The recommendation there was:
[...] Xcode is buggy. Avoid resizing the window and/or solve layout issues by updating the frames (in Xcode 9: menu Editor > Resolve Layout Issues > Update Frames).
I'm still having trouble with that, unfortunately. When I run my app, it now looks fine. However, in Interface Builder, clicking "Update Frames" actually makes the layout progressively worse, such that I have to Git reset to get back to a reasonable layout. I'm worried that later, when I actually need to update my view, I will be in trouble because I am unable to touch it without breaking it.
I've tried to illustrate the issue:
I'm unsure if it's helpful, but here is my list of constraints:
What could be wrong here? Have I forgotten some obvious constraint?

keep constraint width when slider value is changed
I want my Swift code below to keep the width constraint's value when the slider's value is repurposed for something else. You can see what is going on in the GIF below: when the value of the slider is used for something else, the constraint changes, but it should not.
```swift
import UIKit

class ViewController: UIViewController {
    var frontBox = UIButton()
    var backBox = UIButton()
    var selectorB = UIButton()
    var slider = UISlider()
    var slidermultipliera: CGFloat = 0.6
    var slidermultiplierb: CGFloat = 0.6
    var selctorValue = 0

    // constraint we will modify when slider is changed
    var backBoxWidth: NSLayoutConstraint!
    // constraints we will modify when backBox is dragged
    var backBoxCenterY: NSLayoutConstraint!
    var backBoxLeading: NSLayoutConstraint!
    var FrontBoxWidth: NSLayoutConstraint!
    // constraints we will modify when frontBox is dragged
    var FrontBoxCenterY: NSLayoutConstraint!
    var FrontBoxLeading: NSLayoutConstraint!
    var tim = 50.0

    override func viewDidLoad() {
        super.viewDidLoad()
        [backBox, selectorB, frontBox, slider].forEach {
            $0.translatesAutoresizingMaskIntoConstraints = false
            view.addSubview($0)
            $0.backgroundColor = UIColor(
                red: .random(in: 0.0...1),
                green: .random(in: 0.9...1),
                blue: .random(in: 0.7...1),
                alpha: 1
            )
        }
        NSLayoutConstraint.activate([
            selectorB.bottomAnchor.constraint(equalTo: view.bottomAnchor),
            selectorB.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            selectorB.heightAnchor.constraint(equalTo: view.heightAnchor, multiplier: 0.1),
            selectorB.widthAnchor.constraint(equalTo: view.widthAnchor, multiplier: 1),
            slider.bottomAnchor.constraint(equalTo: selectorB.topAnchor),
            slider.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            slider.heightAnchor.constraint(equalTo: view.heightAnchor, multiplier: 0.1),
            slider.widthAnchor.constraint(equalTo: view.widthAnchor, multiplier: 1),
        ])
        frontBox.setTitle("Front", for: .normal)

        // backBox Width constraint
        backBoxWidth = backBox.widthAnchor.constraint(equalTo: view.widthAnchor, multiplier: 0.2)
        // backBox CenterY constraint
        backBoxCenterY = backBox.centerYAnchor.constraint(equalTo: view.centerYAnchor)
        // backBox Leading constraint
        backBoxLeading = backBox.leadingAnchor.constraint(equalTo: self.view.leadingAnchor, constant: CGFloat(tim))
        // frontBox Width constraint
        FrontBoxWidth = frontBox.widthAnchor.constraint(equalTo: view.widthAnchor, multiplier: 0.2)
        // frontBox CenterY constraint
        FrontBoxCenterY = frontBox.centerYAnchor.constraint(equalTo: view.centerYAnchor)
        // frontBox Leading constraint
        FrontBoxLeading = frontBox.leadingAnchor.constraint(equalTo: self.view.leadingAnchor, constant: CGFloat(tim))

        slider.setValue(Float(0.5), animated: false)

        NSLayoutConstraint.activate([
            // backBox Height is constant
            backBox.heightAnchor.constraint(equalTo: view.heightAnchor, multiplier: 0.5),
            backBoxWidth,
            backBoxLeading,
            backBoxCenterY,
            frontBox.heightAnchor.constraint(equalTo: view.heightAnchor, multiplier: 0.3),
            FrontBoxWidth,
            FrontBoxCenterY,
            FrontBoxLeading,
        ])
        selectorB.addTarget(self, action: #selector(press), for: .touchDown)
        slider.addTarget(self, action: #selector(increase), for: .valueChanged)
    }

    @objc func press() {
        selctorValue = selctorValue == 0 ? 1 : 0
        if selctorValue == 1 {
            backBoxWidth.isActive = false
        } else {
            FrontBoxWidth.isActive = false
            backBoxWidth.isActive = true
        }
    }

    @objc func increase() {
        if selctorValue == 1 {
            slidermultipliera = CGFloat(slider.value)
            // update frontBox Width constraint
            FrontBoxWidth.isActive = false
            FrontBoxWidth = frontBox.widthAnchor.constraint(equalTo: view.widthAnchor, multiplier: slidermultipliera)
            FrontBoxWidth.isActive = true
        } else {
            slidermultiplierb = CGFloat(slider.value)
            // update backBox Width constraint
            backBoxWidth.isActive = false
            backBoxWidth = backBox.widthAnchor.constraint(equalTo: view.widthAnchor, multiplier: slidermultiplierb)
            backBoxWidth.isActive = true
        }
    }
}
```

Solve Linear Complementary Problem Numerically
I am searching for a way to solve an LCP (Linear Complementarity Problem) of the form

w = M x + q
w >= 0
x >= 0
x^T w = 0

I read about algorithms such as Lemke's or Dantzig's, but it seems there are also numerical solvers like projected Gauss-Seidel which seem to work better. I tried to find out how they work, but every time I implement one of these algorithms it does not compute the correct solution. I basically just want to compute the x vector when M (an n x n matrix) and q (an n x 1 vector) are given. If someone could show me an implementation of a numerical solver like PGS, or something else that computes the vectors x and w, it would really help me. (I am currently working with Java.)
Thank you
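Since the question mentions projected Gauss-Seidel (PGS), here is a sketch of the standard PGS iteration for an LCP (in Python rather than Java for brevity; convergence is guaranteed, for example, for symmetric positive-definite M, and the update assumes M has a positive diagonal):

```python
import numpy as np

def pgs_lcp(M, q, iters=500):
    """Projected Gauss-Seidel for the LCP: find x >= 0 with
    w = M x + q >= 0 and x_i * w_i = 0 for all i."""
    n = len(q)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # residual of row i excluding the diagonal term
            r = q[i] + M[i] @ x - M[i, i] * x[i]
            # unconstrained row solve, projected onto x_i >= 0
            x[i] = max(0.0, -r / M[i, i])
    return x, M @ x + q

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
x, w = pgs_lcp(M, q)
print(x, w)   # x near [1/3, 1/3], w near [0, 0]
```

The test problem here is a made-up 2x2 SPD example; for indefinite M, pivoting methods such as Lemke's algorithm are the safer choice.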

How to count the number of sudoku solutions using PySCIPOpt?
- There is a working model for solving Sudoku.
- Now, as part of the research, sudokus in which only 2 (4, 6, etc.) cells are unfilled are sent to this model. I need to count the number of solutions of these sudokus with erased cells.
- When the solution count is run on a sudoku that definitely has exactly one solution, the getNCountedSols() method returns a much larger value.

```python
model2.setLongintParam("constraints/countsols/sollimit", 2)
model2.hideOutput()
model2.count()
count_sol = model2.getNCountedSols()
```
Sorry, I'm using PySCIPOpt for the first time, so I don't know much.

How to get optimal dual solution with Pyomo?
On the Internet I have only seen how to access dual variables with this code:

```python
# display all duals
print("Duals")
for c in instance.component_objects(pyo.Constraint, active=True):
    print("   Constraint", c)
    for index in c:
        print("      ", index, instance.dual[c[index]])
```
Is there a way to access optimal dual solution in Pyomo?

When is the callback MIPInfoCallback called during IloCplex branch & cut?
I am using the IloCplex library in C++ and I am wondering when exactly the MIPInfoCallback callback is called during the solve. The documentation only says "IloCplex calls the user-written callback regularly during the branch-and-cut search". Is it called at every node? If so, is it before or after processing the node (i.e. before or after solving the relaxation and adding any cuts)?
Thanks in advance for your answers

Slight difference in objective function of linear programming makes program extremely slow
I am using Google's OR-Tools SCIP (Solving Constraint Integer Programs) solver to solve a mixed-integer programming problem in Python. The problem is a variant of the standard scheduling problem, with constraints ensuring that each worker works at most once per day and that every shift is covered by exactly one worker. The problem is modeled as follows:
Where n represents the worker, d the day and i the specific shift in a given day. The problem comes when I change the objective function that I want to minimize from
To:
In the first case an optimal solution is found within 5 seconds. In the second case, after 20 minutes running, the optimal solution was still not reached. Any ideas to why this happens? How can I change the objective function without impacting performance this much?
Here is a sample of the values taken by the variables tier and acceptance used in the objective function.

MIP (ompr) model taking too much time to solve in R
I am trying to solve a capacitated facility location problem in R. The sample data for that:
```r
n <- 500  # number of customers
m <- 20   # number of facility centers
set.seed(1234)
fixedcost <- round(runif(m, min = 5000, max = 10000))
warehouse_locations <- data.frame(
  id = c(1:m),
  y = runif(m, 22.4, 22.6),
  x = runif(m, 88.3, 88.48)
)
customer_locations <- data.frame(
  id = c(1:n),
  y = runif(n, 22.27, 22.99),
  x = runif(n, 88.12, 88.95)
)
capacity <- round(runif(m, 1000, 4000))
demand <- round(runif(n, 5, 50))
```
The model with the cost functions:
```r
library(geosphere)
transportcost <- function(i, j) {
  customer <- customer_locations[i, ]
  warehouse <- warehouse_locations[j, ]
  (distm(c(customer$x, customer$y), c(warehouse$x, warehouse$y),
         fun = distHaversine) / 1000) * 20
}

library(ompr)
library(magrittr)
model <- MIPModel() %>%
  # 1 iff customer i gets assigned to SC j
  add_variable(x[i, j], i = 1:n, j = 1:m, type = "binary") %>%
  # 1 if SC j is built
  add_variable(y[j], j = 1:m, type = "binary") %>%
  # Objective function
  set_objective(sum_expr(transportcost(i, j) * x[i, j], i = 1:n, j = 1:m) +
                  sum_expr(fixedcost[j] * y[j], j = 1:m), "min") %>%
  # Demand of customers shouldn't exceed total facility capacities
  add_constraint(sum_expr(demand[i] * x[i, j], i = 1:n) <= capacity[j] * y[j], j = 1:m) %>%
  # every customer needs to be assigned to a SC
  add_constraint(sum_expr(x[i, j], j = 1:m) == 1, i = 1:n) %>%
  # if a customer is assigned to a SC, then this SC must be built
  add_constraint(x[i, j] <= y[j], i = 1:n, j = 1:m)
model

library(ompr.roi)
library(ROI.plugin.glpk)
result <- solve_model(model, with_ROI(solver = "glpk", verbose = TRUE))
```
At this moment, the solver is still computing without having returned a result.
Is there any way I can reduce the computation time? If I understand correctly, 0.4% is the gap between the current solution and the optimum. I would be happy even if the gap were far larger than that, as long as I obtain a usable model. Is there any way I can set that? A 5-6% gap would be good enough.