Passing dictionary for receiver_tensor in tf.estimator.export.ServingInputReceiver
I am trying to use the following json_serving_input_fn for prediction on my TF model.
def json_serving_input_fn():
    """Build the serving inputs."""
    feature_placeholders = {
        'a': tf.placeholder(shape=[None], dtype='string'),
        'b': tf.placeholder(shape=[None], dtype='string'),
        'c': tf.placeholder(shape=[1, 10], dtype='string')}
    return tf.estimator.export.ServingInputReceiver(
        features=feature_placeholders,
        receiver_tensors=feature_placeholders)
The train and evaluation processes complete as intended (and are orthogonal to the issue), but my serving function fails with the following error:
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be served via TensorFlow Serving APIs:
INFO:tensorflow:'serving_default' : Classification input must be a single string Tensor; got {'symptoms': <tf.Tensor 'Placeholder_2:0' shape=(1,10) dtype=string>, 'age': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=string>, 'sex': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=string>}
I intend to pass the following json for prediction:
{"a":"x","b":"y","c":["p1","p2","","","","","","","",""]}
How do I rewrite my json_serving_input_fn to accomplish that?
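For reference, the receiver as written can be reproduced and inspected outside of training. Below is a minimal sketch, written against tf.compat.v1 so it also runs under a TensorFlow 2.x install (an assumption; the original code targets TF 1.x, where tf.placeholder exists at the top level). The INFO message above means only that a *Classification* signature cannot be built from a dict of tensors; a *Predict* signature, which does accept a dict keyed like the intended JSON, may still be exportable.

```python
import tensorflow as tf

def json_serving_input_fn():
    """Serving inputs whose receiver-tensor keys match the JSON keys."""
    feature_placeholders = {
        # one string per example for 'a' and 'b'
        'a': tf.compat.v1.placeholder(shape=[None], dtype=tf.string),
        'b': tf.compat.v1.placeholder(shape=[None], dtype=tf.string),
        # a fixed 1x10 block of strings for 'c'
        'c': tf.compat.v1.placeholder(shape=[1, 10], dtype=tf.string),
    }
    return tf.estimator.export.ServingInputReceiver(
        features=feature_placeholders,
        receiver_tensors=feature_placeholders)

# Placeholders require graph mode, so build the receiver inside a Graph.
with tf.Graph().as_default():
    receiver = json_serving_input_fn()
    print(sorted(receiver.receiver_tensors))  # -> ['a', 'b', 'c']
```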
See also questions close to this topic

execute a python script from another one and get the output
I have already searched the forum but have not found anything that solves my problem. I have two simple scripts, P1 and P2. P1 should execute P2 and print its return value.
P1

import sys
import subprocess

sys.path.append('/anaconda2/lib/python2.7/')
output = subprocess.check_output('python P2.py', shell=True)
print output

P2

def foo():
    var1 = 3
    var2 = 6
    return var1 + var2

foo()

If I run P1 I don't receive anything as output, but if I run P2 it prints the value 9 correctly. What's wrong? Thanks
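A minimal sketch of the underlying issue: check_output captures the child process's stdout, and P2 as written never prints anything; foo()'s return value is simply discarded. A P2 variant that does print its result shows up in the captured output. The file name p2_demo.py is made up for this demo, and the snippet is written for Python 3, unlike the Python 2 scripts in the question.

```python
import subprocess
import sys
import textwrap

# A P2 that *prints* its result instead of only returning it.
p2_src = textwrap.dedent("""
    def foo():
        var1 = 3
        var2 = 6
        return var1 + var2

    print(foo())
""")

with open("p2_demo.py", "w") as f:
    f.write(p2_src)

# Run the script with the same interpreter and capture its stdout.
output = subprocess.check_output([sys.executable, "p2_demo.py"])
print(output.decode().strip())  # -> 9
```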

How to compare 2 images value with opencv?
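One concrete way to "explain the process by matrix values" is to print a pixel before and after the conversion. This sketch reproduces OpenCV's COLOR_BGR2GRAY weighting (Y = 0.299 R + 0.587 G + 0.114 B) with plain NumPy on a tiny random array, so it runs even without cv2 installed; with a real image you would load bgr via cv2.imread instead.

```python
import numpy as np

rng = np.random.default_rng(0)
bgr = rng.integers(0, 256, size=(2, 2, 3), dtype=np.uint8)  # tiny "image"

# OpenCV's COLOR_BGR2GRAY weights, applied manually so the before/after
# matrices can be compared element by element (channel order is B, G, R).
weights = np.array([0.114, 0.587, 0.299])
gray = (bgr @ weights).round().astype(np.uint8)

print("before (one pixel, B/G/R):", bgr[0, 0])
print("after  (same pixel, gray):", gray[0, 0])
print("shapes:", bgr.shape, "->", gray.shape)  # 3 channels collapse to 1
```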

How can I merge two dataframes of different row lengths based on a common column
I have two dataframes of different lengths and I want to join them based on a common value in a specific column: the Number column should be summed where the Ticker column matches. For example, I want a new column where the value for EMBR3 BZ is 2967205158 (2967200592 + 4566), while the row for ticker ASURB MM is kept at 3356205474, as it is not present in df2.
To add to this, I also have columns after the Number column in both dataframes which are not shown below (there are too many), and I don't want to lose them in my final output. If I use pd.merge I lose the columns after Number in df1. I'm really struggling with this and would appreciate it if someone could help. Thanks!
df1
Ticker      Number
EMBR3 BZ    2967200592
LREN3 BZ    7655250160
ASURB MM    3356205474
ISA         2095646662
DFD         6765767657
L65N3 BZ    765545664

df2
Ticker      Number
EMBR3 BZ    4566
LREN3 BZ    3776
ISA         46575

output
Ticker      Number      New Number
EMBR3 BZ    2967200592  2967205158
LREN3 BZ    7655250160  7655253936
ASURB MM    3356205474  3356205474
ISA         2095646662  2095693237
DFD         6765767657  6765767657
L65N3 BZ    765545664   765545664
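A left merge keeps every row of df1 and all of its columns; summing the two Number columns afterwards gives the desired output. A sketch using the data shown above (the Extra column is a hypothetical stand-in for the many unshown columns):

```python
import pandas as pd

df1 = pd.DataFrame({
    "Ticker": ["EMBR3 BZ", "LREN3 BZ", "ASURB MM", "ISA", "DFD", "L65N3 BZ"],
    "Number": [2967200592, 7655250160, 3356205474, 2095646662, 6765767657, 765545664],
    "Extra": list("abcdef"),  # stand-in for the other df1 columns
})
df2 = pd.DataFrame({
    "Ticker": ["EMBR3 BZ", "LREN3 BZ", "ISA"],
    "Number": [4566, 3776, 46575],
})

# how="left" keeps every df1 row (and its extra columns); fillna(0) covers
# tickers absent from df2, then the two Number columns are summed.
out = df1.merge(df2, on="Ticker", how="left", suffixes=("", "_df2"))
out["New Number"] = out["Number"] + out["Number_df2"].fillna(0).astype(int)
out = out.drop(columns="Number_df2")

print(out.loc[0, "New Number"])  # -> 2967205158
```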

How does this sequential model work without a time distributed?
I followed a tutorial to make a Keras LSTM model that has 80 timesteps, looks at 80 words per timestep, and predicts 1 word at a time. Now that I'm making a different LSTM model with the functional API, I'm not sure how my other model works without a TimeDistributed layer. The first LSTM model is listed below. How is it that the following model makes 80 separate predictions at different points in time in the same batch without a TimeDistributed layer?
import numpy as np
from tensorflow import keras

model = keras.Sequential()
model.add(keras.layers.Embedding(15938, 150, input_length=80))
model.add(keras.layers.CuDNNLSTM(1024))
model.add(keras.layers.Dense(15938, activation='softmax'))

arrayOfArraysToTrainOnInputF = np.empty([80, 80], dtype=int)
arrayOfArraysToTrainOnTargetF = np.empty([80, 15938], dtype=int)
model.train_on_batch(arrayOfArraysToTrainOnInputF, arrayOfArraysToTrainOnTargetF)

What is the complexity of strided slice in Tensorflow?
I am wondering what the complexity of the strided slice operation in Tensorflow is. Obviously it is not as computationally intensive as a 2D convolution, but it's certainly not free either. I'm not even sure talking about complexity is meaningful for this operation, since there is no addition or multiplication performed. To be concrete, let's say I have a 10x3x3x10 tensor foo and I want to perform bar = foo[3:5,:,:,4:5]. How would you evaluate the complexity of the operation (both in terms of space and time)?
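For intuition, the same slice in NumPy is a constant-time view: no data is copied. TensorFlow's strided_slice, by contrast, generally materialises a new tensor, so its time and space cost scale with the number of elements in the *output* (2 x 3 x 3 x 1 = 18 here), not with the size of foo.

```python
import numpy as np

foo = np.zeros((10, 3, 3, 10))
bar = foo[3:5, :, :, 4:5]  # the same slice as in the question

# In NumPy, basic slicing returns a view backed by foo's buffer:
# O(1) time and space, no copy made.
assert bar.base is foo
print(bar.shape)  # -> (2, 3, 3, 1)
```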
Getting results from tf.metrics
I'm trying to get AUC metrics for a Keras model; the API docs don't explain this method in much detail:
tf.metrics.auc(
    labels,
    predictions,
    weights=None,
    num_thresholds=200,
    metrics_collections=None,
    updates_collections=None,
    curve='ROC',
    name=None,
    summation_method='trapezoidal'
)
Does anyone know how to get from the output of this method to a result? If I run this with labels and predictions as one-hot encoded vectors (which works with sklearn.metrics) I get back
(<tf.Tensor 'auc_4/value:0' shape=() dtype=float32>, <tf.Tensor 'auc_4/update_op:0' shape=() dtype=float32>)
But I don't understand what to do with these tensors to get the metric back.
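tf.metrics.auc returns a pair of tensors, not a number: an update op that accumulates confusion-matrix counts in local variables, and a value tensor that reads the current estimate. In TF 1.x you run the update op once per batch and then evaluate the value. A sketch, written via tf.compat.v1 so it also runs on a TF 2.x install (an assumption about the environment):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

labels = tf.constant([0., 0., 1., 1.])
preds = tf.constant([0.1, 0.4, 0.35, 0.8])

# auc returns (value, update_op): update_op accumulates the confusion
# counts stored in *local* variables; value reads the current estimate.
auc_value, update_op = tf.metrics.auc(labels, preds)

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())  # init the metric's counters
    sess.run(update_op)                         # feed one batch
    result = sess.run(auc_value)

print(result)  # approximately 0.75 for this toy batch
```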

How to force predict() to predict a fixed amount of values
We are using Bayesian models to predict NBA All-Star selection based on different performance stats. There are 24 All-Stars selected each year. Unfortunately, we can't find a way to make our prediction model understand this; it is either predicting too few or too many All-Stars. All-Star status is included in the data as a binary column (1 if the player makes the All-Star team, 0 if the player does not).
example of the code:
predict(fitBN, response = targetVar, newdata = testSet, predictors = names(test)[col.target.var])
Is there any way, or any arguments, to force the predict() function to predict exactly 24 All-Star players?

Predicting probability of disease according to a continuous variable, adjusting by confounding variables
I have a doubt regarding the R package "margins". I'm estimating a logistic model:

modelo1 <- glm(VD2 ~ VE12 + VE.cont + VE12:VE.cont + VC1 + VC2 + VC3 + VC4, family="binomial", data=data)

Where VD2 is a dichotomous outcome variable (1 = disease / 0 = no disease), VE12 is a dichotomous exposure variable (with values 0 and 1), VE.cont is a continuous exposure variable, and VCx (the rest of the variables) are confounding variables.
My objective is to obtain the predicted probability of disease (VD2) for a vector of values of VE.cont and for each VE12 group, but adjusting for the VCx variables. In other words, I would like to obtain the dose-response line between VD2 and VE.cont by VE12 group, but assuming the same distribution of VCx for each dose-response line (i.e. without confounding). Following the nomenclature of this article (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4052139/), I think I should do a "marginal standardisation" (method 1), which can be done with Stata, but I'm not sure how I can do it with R. I'm using this syntax (with R):

cdat0 <- cplot(modelo1, x="VE.cont", what="prediction", data = data[data[["VE12"]] == 0,], draw=T, ylim=c(0,0.3))
cdat1 <- cplot(modelo1, x="VE.cont", what="prediction", data = data[data[["VE12"]] == 1,], draw="add", col="blue")

but I'm not sure if I'm doing it right, because this approach gives similar results to using the model without confounding variables and the function predict.glm:

modelo0 <- glm(VD2 ~ VE12 + VE.cont + VE12:VE.cont, family="binomial", data=data)

Perhaps I should use the margins option, but I don't understand the results, because the values obtained in the VE.cont column are not on the probability scale (between 0 and 1).

x <- c(1,2,3,4,5)
margins::margins(modelo1, at=list("VE.cont"=x, "VE12"=c(0,1)), type="response")

Can't predict using caret model, due to "error in splineDesign"
I have trained several models using the caret package in R, and the "gamboost" method, which relies on the mboost package.
Most of my models are working fine, but one of them keeps giving me the following error when I try to predict with new data:
Error in splineDesign(k, x, degree + 1, derivs = rep(deriv, length(x)), : empty 'derivs'
The model will predict ok when I use the training data stored in the caret model object (model$trainingData). I have checked the structure of my new data, and it looks the same to me as the training data (i.e. all the variables are the correct data type, and all the factors have the correct levels).
I don't understand the error message. I've tried using predict with several different simulated datasets, but can't get it to work with any except the trainingData.
Any help would be much appreciated.

How to initialize the variables of a pb file in tensorflow without accompanying ckpt files?
I have a simple pb file without any ckpt file. I would like to (randomly) initialize all the weights of the pb file and save the initialized weights as a ckpt file. I could not find any way to do it; the global variable initializer just threw "no variables to save".

Installing TensorFlow serving without using a docker
Despite my best efforts, I have found no way to install TensorFlow Serving without using Docker. Is the use of Docker firmly tied to TensorFlow Serving, or is there a workaround?

How can I debug predictions on ML Engine, predictions returns empty array
I am implementing a TFX pipeline, similar to the Chicago taxi example. The prediction of the pushed model returns {"predictions": []}. How do I debug this issue?
I can see logs of the predictions being made, but because it returns an empty array the status code is 200 and there is no useful information on what went wrong. I expect the prediction request data isn't being passed correctly to the estimator.
The Chicago example uses the following as its serving receiver, and that works. I assume it should also work for my example:
def _example_serving_receiver_fn(transform_output, schema):
  """Build the serving inputs.

  Args:
    transform_output: directory in which the tf-transform model was written
      during the preprocessing step.
    schema: the schema of the input data.

  Returns:
    Tensorflow graph which parses examples, applying tf-transform to them.
  """
  raw_feature_spec = _get_raw_feature_spec(schema)
  raw_feature_spec.pop(_LABEL_KEY)
  raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
      raw_feature_spec, default_batch_size=None)
  serving_input_receiver = raw_input_fn()
  transformed_features = transform_output.transform_raw_features(
      serving_input_receiver.features)
  return tf.estimator.export.ServingInputReceiver(
      transformed_features, serving_input_receiver.receiver_tensors)
The main difference is that I only expect one input: a string of programming languages separated by '': 'javapython'. I then split that string up in my preprocessing function and turn it into a multi-hot encoded array of shape 500 (I have exactly 500 options).
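The split-and-encode step described above can be sketched outside of tf.Transform. The vocabulary below is hypothetical (only three of the 500 options are shown), and the input is already split into a list, sidestepping the separator question:

```python
import numpy as np

# Hypothetical vocabulary mapping language -> index; the real one has
# 500 entries.
vocab = {"java": 0, "python": 1, "javascript": 2}
NUM_OPTIONS = 500

def multi_hot(langs):
    """Encode a list of languages as a multi-hot vector of shape (500,)."""
    vec = np.zeros(NUM_OPTIONS, dtype=np.int64)
    for lang in langs:
        vec[vocab[lang]] = 1
    return vec

vec = multi_hot(["javascript", "python"])
print(vec.sum())  # -> 2
```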
It could also be the case that the prediction input isn't being correctly transformed by tf.Transform (tf.Transform is part of the TFX pipeline and runs correctly).
request:
{"instances": ["javascriptpython"]}
response:
{"predictions": []}
expected response:
{"predictions": [520]}
(it's a regression model)