How can I generate Iris data using a conditional GAN?
I want to generate synthetic Iris data, so I built a conditional GAN in Keras, but the generated samples don't look like the real data. How should I set this up so that it produces plausible Iris samples?
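The post doesn't include the model, so here is a minimal conditional-GAN sketch for the four Iris features, conditioning both networks on the one-hot class label. Layer sizes, learning rates, and the step count are illustrative assumptions, not tuned values:

```python
import numpy as np
import tensorflow as tf
from sklearn.datasets import load_iris
from tensorflow.keras import layers, Model

LATENT_DIM, N_FEATURES, N_CLASSES = 8, 4, 3   # iris: 4 features, 3 classes

def build_generator():
    noise = layers.Input(shape=(LATENT_DIM,))
    label = layers.Input(shape=(N_CLASSES,))          # one-hot class condition
    x = layers.Concatenate()([noise, label])
    x = layers.Dense(16, activation='relu')(x)
    return Model([noise, label], layers.Dense(N_FEATURES)(x))

def build_discriminator():
    sample = layers.Input(shape=(N_FEATURES,))
    label = layers.Input(shape=(N_CLASSES,))
    x = layers.Concatenate()([sample, label])
    x = layers.Dense(16, activation='relu')(x)
    return Model([sample, label], layers.Dense(1, activation='sigmoid')(x))

gen, disc = build_generator(), build_discriminator()
g_opt, d_opt = tf.keras.optimizers.Adam(1e-3), tf.keras.optimizers.Adam(1e-3)
bce = tf.keras.losses.BinaryCrossentropy()

# standardised features and one-hot labels
X, y = load_iris(return_X_y=True)
X = ((X - X.mean(0)) / X.std(0)).astype('float32')
Y = np.eye(N_CLASSES, dtype='float32')[y]

for step in range(100):                               # illustrative step count
    idx = np.random.randint(0, len(X), 32)
    z = tf.random.normal((32, LATENT_DIM))
    # discriminator step: push real rows towards 1, generated rows towards 0
    with tf.GradientTape() as tape:
        fake = gen([z, Y[idx]], training=True)
        d_loss = (bce(tf.ones((32, 1)), disc([X[idx], Y[idx]], training=True))
                  + bce(tf.zeros((32, 1)), disc([fake, Y[idx]], training=True)))
    d_opt.apply_gradients(zip(tape.gradient(d_loss, disc.trainable_variables),
                              disc.trainable_variables))
    # generator step: try to make the discriminator output 1 on generated rows
    with tf.GradientTape() as tape:
        fake = gen([z, Y[idx]], training=True)
        g_loss = bce(tf.ones((32, 1)), disc([fake, Y[idx]], training=True))
    g_opt.apply_gradients(zip(tape.gradient(g_loss, gen.trainable_variables),
                              gen.trainable_variables))

# sample 5 synthetic rows conditioned on class 0 (setosa)
z = tf.random.normal((5, LATENT_DIM))
samples = gen([z, np.eye(N_CLASSES, dtype='float32')[[0] * 5]], training=False).numpy()
```

If the samples still look wrong, the usual suspects are an unstandardised input scale, too few training steps, or a discriminator that overpowers the generator.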
See also questions close to this topic
Denoising model in keras, which model is appropriate?
I am trying to model sequences; below is an example (true values, y): the red dots are true values and the blue dots are values that need to be predicted from the red dots and the matrix (below). During training the blue dots are set to 0 and I add a 1/0 mask marking them as missing, so in total the input is a (50, 12) matrix and the output a (50, 1) vector.
In theory the model should be 100% accurate; however, every model I have tried smooths the signal (see the example y_hat below). I also tried an LSTM, but that doesn't seem to fit well at all.
Does someone have an idea about an appropriate model for this problem?
Something that I have tried:
```python
from keras.layers import Input, Dense, Flatten, concatenate
from keras.models import Model

main_input = Input(shape=(50, 12))
aux_input = Input(shape=(50, 1))    # the 1/0 mask marking missing values

d1 = Dense(64, activation='elu')(main_input)
d2 = Dense(64, activation='elu')(d1)
d3 = Dense(64, activation='elu')(d2)
merge = concatenate([d3, aux_input])
flatten = Flatten()(merge)
d4 = Dense(64, activation='elu')(flatten)
d5 = Dense(64, activation='elu')(d4)
d6 = Dense(n_snps, activation='elu')(d5)    # n_snps defined elsewhere
output = Dense(n_snps)(d6)

# both inputs must be listed here, otherwise aux_input is a disconnected graph
model = Model(inputs=[main_input, aux_input], outputs=[output])
model.compile(loss='mse', optimizer='adam')
```
tensorflow/keras utils model confusion between _api/v1/keras/ and python/keras
I just tried to import `vis_utils` from `tensorflow.keras`, but it gives me:
```
>>> import tensorflow.keras.utils.vis_utils
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named 'tensorflow.keras.utils.vis_utils'
```
Checking the location of utils tells me that it points to the wrong (?) directory:
```
>>> print(tensorflow.keras.utils.__file__)
/usr/local/lib/python3.5/dist-packages/tensorflow/_api/v1/keras/utils/__init__.py
```
But it should actually point to:

```
/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/utils/__init__.py
```

I've installed everything via pip, and the TF version is 1.12 on a vanilla Ubuntu 16.04. Is the installation tainted, or how do I tell Python to load the correct module?
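For what it's worth, in TF 1.x the public API is deliberately assembled under `_api/v1/keras`, so that path is expected rather than a sign of a tainted install; the `vis_utils` module is simply not re-exported there. A common workaround (worth verifying against your specific install) is to use the re-exported `plot_model` helper, or to reach into the internal package:

```python
# the plotting helper is re-exported at the public path
from tensorflow.keras.utils import plot_model

# if you specifically need the vis_utils module itself, it lives in the
# internal package (an implementation detail that can move between versions):
# from tensorflow.python.keras.utils import vis_utils
```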
Error when checking target: expected activation_1 to have shape (1,) but got array with shape (10,)
I'm having issues with this model, which tries to forecast the stock market 10 days into the future:
```python
model = Sequential()
model.add(LSTM(input_shape=(None, INPUT_DIM), units=UNROLL_LENGTH, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(128, return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))
model.add(Activation('linear'))

start = time.time()
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')
model.fit(x_train_unroll, y_train_unroll, batch_size=BATCH_SIZE, epochs=EPOCHS,
          verbose=2, validation_split=0.05)
```
```
ValueError: Error when checking target: expected activation_1 to have shape (1,) but got array with shape (10,)
```
Shapes of the numpy arrays:
```
x_train (1968, 50, 3), y_train (1968, 10), x_test (450, 50, 3), y_test (450, 10)
```
```
X_TRAIN_UNROLL
[[[0.12339965 0.1352139  0.11937183]
  [0.12231633 0.16698145 0.12354637]
  [0.12261178 0.13978988 0.11837789]
  ...
  [0.04057514 0.16677908 0.03448961]
  [0.03998424 0.16039329 0.03439022]
  [0.03407524 0.18277416 0.03906172]]

Y_TRAIN_UNROLL
[[0.06529447 0.06007485 0.06165058 ... 0.06342328 0.0627339  0.05465826]
 [0.06007485 0.06165058 0.06204451 ... 0.0627339  0.05465826 0.05515068]
 [0.06165058 0.06204451 0.06135513 ... 0.05465826 0.05515068 0.04687808]
 ...
 [0.68505023 0.67096711 0.66988379 ... 0.66525507 0.66289147 0.64171755]
 [0.67096711 0.66988379 0.66968682 ... 0.66289147 0.64171755 0.65195982]
 [0.66988379 0.66968682 0.67234587 ... 0.64171755 0.65195982 0.64250542]]
```
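The error comes from the loss: `sparse_categorical_crossentropy` expects integer class indices with a trailing dimension of 1, while `y_train_unroll` holds 10 real-valued future prices per sample. Since the targets are continuous, this looks like a regression problem rather than classification. A sketch of one possible fix (linear output, MSE loss; the dimensions are taken from the shapes above, and this is not the only valid setup):

```python
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

INPUT_DIM, UNROLL_LENGTH = 3, 50   # assumed from the array shapes above

# regression over 10 future values: no softmax, no extra Activation layer,
# and a loss that matches the (batch, 10) float targets
model = Sequential([
    Input(shape=(None, INPUT_DIM)),
    LSTM(UNROLL_LENGTH, return_sequences=True),
    Dropout(0.2),
    LSTM(128, return_sequences=False),
    Dropout(0.2),
    Dense(10, activation='linear'),   # 10 days ahead
])
model.compile(loss='mse', optimizer='adam')
```

Keeping `softmax` plus `sparse_categorical_crossentropy` only makes sense if the targets are single integer class labels, which these are not.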
Keras: maximising vs minimising activation for visualisation of 1D filters
Apologies in advance, as I am not experienced with deep learning.
I have a set of 1D sequences (each 3000 elements long, each with a label from 1 to 7). I am using a multi-class classifier to identify the distinct patterns belonging to each label.
Judging by precision and recall, the classifier does a reasonable job of identifying the different labels (~60-80% accuracy across the labels).
```python
model = Sequential()
model.add(Conv1D(75, 2000, strides=1, padding='same',
                 input_shape=X.shape[1:], activation='relu'))
model.add(MaxPooling1D(2000))
model.add(Flatten())
model.add(Dropout(0.2))
model.add(Dense(len(categories) + 1, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['categorical_accuracy'])
model.fit(train_X, train_y, epochs=3, batch_size=100)
```
I want to visualise what is being recognised by each of the 75 filters in the convolution layer.
To do this I implemented an analysis demonstrated in F. Chollet's book "Deep Learning with Python" (page 169), where a filter is visualised by artificially maximising its activation to produce a sequence that the filter responds to/recognises.
```python
def generate_pattern(layer_name, filter_index, size=3000):  # size = length of my vectors
    from keras import backend as K
    layer_output = model.get_layer(layer_name).output
    loss = K.mean(layer_output[:, :, filter_index])
    grads = K.gradients(loss, model.input)[0]   # K.gradients returns a list
    grads /= (K.sqrt(K.mean(K.square(grads))) + 1e-5)
    iterate = K.function([model.input], [loss, grads])
    # gradient ascent to maximise the activation
    input_img_data = np.random.random((1, size, 1)) * 20 + 75
    step = 1
    for i in range(10000):   # number of iterations does not affect the output
        loss_value, grads_value = iterate([input_img_data])
        input_img_data += grads_value * step
    return input_img_data
```
In the book the example is performed on a 2D image whereas my data are 1D, but I have tried to re-adapt the code above so that it does the same thing for a 1D tensor.
I have visualised all 75 filters, and every one of them produces a noisy, uninterpretable line. Below I have randomly selected and plotted 5 different filters to show what the output typically looks like.
I would have expected/hoped for much less noisy lines. The only other thing I have not tried is minimising the activation instead. Can anyone recommend how to do that? Or am I completely misapplying this technique?
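Minimising instead of maximising only requires flipping the sign of the update: descend the gradient instead of ascending it. A sketch of the same loop written with `tf.GradientTape` (the eager-mode equivalent of the book's `K.function` approach; the function and its parameters are illustrative, and it assumes a functional model containing the named Conv1D layer):

```python
import numpy as np
import tensorflow as tf

def generate_minimising_pattern(model, layer_name, filter_index, size=3000, steps=100):
    # sub-model exposing the chosen layer's activations
    feature_extractor = tf.keras.Model(model.inputs, model.get_layer(layer_name).output)

    # same random starting sequence as the maximising version
    x = tf.Variable(np.random.random((1, size, 1)) * 20 + 75, dtype=tf.float32)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            activation = tf.reduce_mean(feature_extractor(x)[:, :, filter_index])
        grads = tape.gradient(activation, x)
        # normalise the gradient, as in the book's maximising loop
        grads /= tf.sqrt(tf.reduce_mean(tf.square(grads))) + 1e-5
        x.assign_sub(grads)   # descend to minimise; assign_add here would maximise
    return x.numpy()
```

Whether minimisation helps is a separate question: for a Conv1D with a 2000-wide kernel followed by global pooling, noisy patterns in both directions may simply mean the filters respond to coarse statistics rather than localised shapes.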
tensorflow.keras can't import Activation
I just installed `tensorflow-gpu` via:

```
conda install --yes tensorflow-gpu==1.12.0
```
Now when I run

```
from tensorflow.keras import layers
```

I run into the error:

```
ImportError: cannot import name 'Activation'
```
I tried removing tf and keras and then reinstalling tf, but that hasn't helped.
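An `ImportError` on a symbol that definitely exists usually points to a broken or mixed install (e.g. leftover files from a partially removed `keras` package shadowing the bundled one) rather than wrong code. In a clean environment the import works, which is worth sanity-checking before reinstalling again (a minimal check, assuming a working install):

```python
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.layers import Activation   # the import that was failing

# Activation is an ordinary layer in the bundled Keras
act = layers.Activation('relu')
print(tf.__version__)
```

If this still fails after `conda remove keras tensorflow-gpu` and a fresh install, creating a new conda environment is often faster than repairing the old one.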
How does the layer architecture of multiple models connect together in Keras?
I've started to get into machine learning with `Keras`, and I was testing out some example programs with `LSTM` layers. This example uses a separate model per data source, but I'm confused about how the layer architecture fits together. With other ML problems I like to visualise the network, but with `sklearn` I'm usually just using one classifier, not layering a model myself.
What's the benefit of using `Concatenate` on the models created in the `generateModel` function, and then following that with a series of `Flatten` layers? I've gone through the `Keras` documentation on the layers, but I'm still having trouble visualising what each layer is doing. I assume each layer feeds into the next, but why order the layers in this fashion?
```python
def generateModel(self):
    inp = Input(shape=(3, 1))    # "in" is a reserved word in Python
    out = LSTM(units=40, return_sequences=True)(inp)
    out = TimeDistributed(Dense(40))(out)
    return (out, inp)

def buildModel(self):
    if self.multiModel:
        modelList = [self.generateModel() for i in range(16)]
        outs = [o for o, i in modelList]   # Concatenate needs the output tensors,
        ins = [i for o, i in modelList]    # Model needs the input tensors
        m = Concatenate()(outs)
        m = LSTM(units=25, return_sequences=True)(m)
        m = TimeDistributed(Dense(25))(m)
        m = LSTM(units=15, return_sequences=True)(m)
        m = TimeDistributed(Dense(15))(m)
        m = LSTM(units=10, return_sequences=True)(m)
        m = TimeDistributed(Dense(10))(m)
        m = Flatten()(m)
        m = Dense(60)(m)
        merged = Dense(1)(m)
        kerasModel = Model(inputs=ins, outputs=merged)
        kerasModel.compile(loss="mean_squared_error", optimizer="adam")
```
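To see what `Concatenate` and `Flatten` do to the tensor shapes, it helps to trace a cut-down version: `Concatenate` joins the per-source `(3, 40)` sequence outputs along the feature axis (with the snippet's 16 branches that gives `(3, 640)`), the stacked LSTMs compress that feature dimension while keeping the time axis, and `Flatten` turns the final sequence into a flat vector so plain `Dense` layers can map it to one prediction. A minimal shape-trace sketch using 3 branches instead of 16 (sizes are assumptions matching the snippet above):

```python
from tensorflow.keras.layers import (Input, LSTM, TimeDistributed, Dense,
                                     Concatenate, Flatten)
from tensorflow.keras.models import Model

branches = []
for _ in range(3):                               # 3 sources instead of 16, for brevity
    inp = Input(shape=(3, 1))
    out = LSTM(40, return_sequences=True)(inp)   # -> (None, 3, 40) per source
    out = TimeDistributed(Dense(40))(out)        # -> (None, 3, 40)
    branches.append((out, inp))

outs = [o for o, _ in branches]
ins = [i for _, i in branches]

m = Concatenate()(outs)                 # -> (None, 3, 120): features joined per step
m = LSTM(10, return_sequences=True)(m)  # -> (None, 3, 10): features compressed
m = Flatten()(m)                        # -> (None, 30): ready for plain Dense layers
pred = Dense(1)(m)
model = Model(inputs=ins, outputs=pred)
print(model.output_shape)               # (None, 1)
```

So the ordering is: merge the sources, let recurrent layers mix them while the time structure still exists, then flatten only at the very end when no more sequence processing is needed.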