Keras AttributeError: 'Tensor' object has no attribute 'log'
I am getting the error 'Tensor' object has no attribute 'log' while applying a custom loss function to a network I built in Keras. I think I somehow need to get rid of np.log, but I am not sure how. Please help. Thanks.
Import NumPy:
import numpy as np
Custom Function
def rmsle(y_pred, y_test):
    return np.sqrt(np.mean((np.log(1 + y_pred) - np.log(1 + y_test)) ** 2))
My network
def base_model():
    model = Sequential()
    model.add(Dense(50, input_dim=X_train.shape[1], init='normal', activation='sigmoid'))
    model.add(Dropout(0.5))
    model.add(Dense(1, init='normal'))
    sgd = SGD(lr=0.01, momentum=0.8, decay=0.1, nesterov=False)
    model.compile(loss=rmsle, optimizer=sgd)  # also tried 'adam'
    return model

keras = KerasRegressor(build_fn=base_model, nb_epoch=80, batch_size=1, verbose=1)
keras.fit(X_train, y_train)
When I check the error message in detail, it shows:
424         """
425         # score_array has ndim >= 2
--> 426     score_array = fn(y_true, y_pred)
427         if mask is not None:
428             # Cast the mask to floatX to avoid float64 upcasting in theano

2       # return np.sqrt(np.mean(np.square(np.log(np.exp(a) + 1) - np.log(np.exp(b) + 1))))
--> 4   return np.sqrt(np.mean((np.log(1 + y_pred) - np.log(1 + y_test)) ** 2))
2 answers

Lambda layers in Keras let you implement functionality that is not prebuilt and that does not require trainable weights. So you are free to implement your own logic, as in this case log.
This can also be done using a Keras Lambda layer, as below:

from keras.layers import Lambda
import keras.backend as K

Define your function here:

def logFun(x):
    return K.log(x)

And later create a Lambda layer:

model.add(Lambda(logFun, ...))

You must use valid tensor operations from your backend (i.e. from keras.backend) in order to define a custom loss function. For example, your loss function could be defined as follows:

import keras.backend as K

def rmsle(y_test, y_pred):
    return K.sqrt(K.mean(K.square(K.log(1 + y_pred) - K.log(1 + y_test))))

NOTE: Keras expects the first argument to be y_test (i.e. the ground truth).
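To sanity-check the corrected formula outside the Keras graph, the same expression can be evaluated eagerly with plain NumPy; note the squared log-difference makes the metric symmetric in its two arguments, so the y_test/y_pred ordering only matters once sample weighting or masking is involved:

```python
import numpy as np

def rmsle_np(y_test, y_pred):
    # same expression as the K.* loss, evaluated eagerly with NumPy
    return np.sqrt(np.mean((np.log(1 + y_pred) - np.log(1 + y_test)) ** 2))

y_test = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.0, 2.0, 3.0])
print(rmsle_np(y_test, y_pred))        # identical predictions -> 0.0

y_pred2 = np.array([2.0, 3.0, 4.0])
print(rmsle_np(y_test, y_pred2) > 0)   # any mismatch -> positive value
```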
See also questions close to this topic

how to add non .py files into python egg
I have a Flask app which looks like:

myapp
├── src
│   └── python
│       ├── config
│       └── app
├── MANIFEST.in
└── setup.py
The config folder is full of *.yaml files, I want to add all the static config files into my python egg after using
python setup.py install
My setup.py looks like
import os
from setuptools import setup, find_packages

path = os.path.dirname(os.path.abspath(__file__))
setup(
    name="app",
    version="1.0.0",
    author="Anna",
    description="",
    keywords=[],
    packages=find_packages(path + '/src/python'),
    package_dir={'': path + '/src/python'},
    include_package_data=True
)
I am trying to use the MANIFEST.in to add the config files. However, it always gives the error:
error: Error: setup script specifies an absolute path: /Users/Anna/Desktop/myapp/src/python/app setup() arguments must *always* be /separated paths relative to the setup.py directory, *never* absolute paths.
I have not used any absolute paths in my code. I've seen other posts trying to bypass this error by removing include_package_data=True. However, in my case, if I do that to avoid the error, my YAML files won't be added.
I was wondering if there are ways to fix this problem. Thanks
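One pattern that usually avoids both problems (a sketch; it assumes the YAML files live under src/python/config): reference the data files in MANIFEST.in with paths relative to setup.py, and keep the packages/package_dir arguments relative so that no absolute path ever reaches setup() — which is exactly what the error message complains about:

```
# MANIFEST.in -- all paths are relative to the directory containing setup.py
recursive-include src/python/config *.yaml
```

```
# setup.py fragment -- relative paths instead of path + '/src/python'
packages=find_packages('src/python'),
package_dir={'': 'src/python'},
```

Since setup.py is always run from its own directory during a build, the os.path.abspath(__file__) prefix adds nothing and is what trips the "must *always* be /-separated paths relative to the setup.py directory" check.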

How to extract all functions and API calls used in a Python source code?
Let us consider the following Python source code;
def package_data(pkg, roots):
    data = []
    for root in roots:
        for dirname, _, files in os.walk(os.path.join(pkg, root)):
            for fname in files:
                data.append(os.path.relpath(os.path.join(dirname, fname), pkg))
    return {pkg: data}
From this source code, I want to extract all the functions and API calls. I found a similar question and solution. I ran the solution given there and it generates the output [os.walk, data.append]. But I am looking for the following output: [os.walk, os.path.join, data.append, os.path.relpath, os.path.join]. What I understood after analyzing the solution code is that it visits every node before the first bracket and drops the rest.
import ast

class CallCollector(ast.NodeVisitor):
    def __init__(self):
        self.calls = []
        self.current = None

    def visit_Call(self, node):
        # new call, trace the function expression
        self.current = ''
        self.visit(node.func)
        self.calls.append(self.current)
        self.current = None

    def generic_visit(self, node):
        if self.current is not None:
            print("warning: {} node in function expression not supported".format(
                node.__class__.__name__))
        super(CallCollector, self).generic_visit(node)

    # record the func expression
    def visit_Name(self, node):
        if self.current is None:
            return
        self.current += node.id

    def visit_Attribute(self, node):
        if self.current is None:
            self.generic_visit(node)
        self.visit(node.value)
        self.current += '.' + node.attr

tree = ast.parse(yoursource)
cc = CallCollector()
cc.visit(tree)
print(cc.calls)
Can anyone please help me modify this code so that it also traverses the API calls inside the brackets?
N.B.: This could be done with regex in Python, but that would require a lot of manual labor to find the appropriate API calls, so I am looking for something based on the abstract syntax tree.
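One way to pick up the nested calls (a sketch, not the original answerer's code): instead of a NodeVisitor that stops at the outermost call, iterate every node with ast.walk, which also descends into call arguments, and rebuild dotted names from Name/Attribute chains:

```python
import ast

SOURCE = """
def package_data(pkg, roots):
    data = []
    for root in roots:
        for dirname, _, files in os.walk(os.path.join(pkg, root)):
            for fname in files:
                data.append(os.path.relpath(os.path.join(dirname, fname), pkg))
    return {pkg: data}
"""

def dotted_name(node):
    # Rebuild a dotted name like "os.path.join" from a Name/Attribute chain.
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Attribute):
        base = dotted_name(node.value)
        return base + '.' + node.attr if base else None
    return None  # e.g. a call on a subscript or on another call's result

calls = []
for node in ast.walk(ast.parse(SOURCE)):  # walk visits nested nodes too
    if isinstance(node, ast.Call):
        name = dotted_name(node.func)
        if name:
            calls.append(name)

print(calls)
```

Because ast.walk visits the arguments of each Call node as well, the os.path.join nested inside os.walk(...) and os.path.relpath(...) is collected, giving all five calls (os.path.join twice).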

use correct version of 'pip' installed for your Python interpreter
I am using PyCharm, and I am getting this error when adding any package (screenshot linked). I have tried a lot of methods, but haven't succeeded yet.
Info:
- Python 3.6.2
- pip 10.0.1
- virtualenv

gcloud mlengine with python3.5 tkinter import error on run
I'm trying to run my model on Google Cloud ML Engine with:

gcloud ml-engine jobs submit training $NAME --module-name train.task_w2v \
  --package-path train --runtime-version 1.8 --python-version 3.5 \
  --scale-tier BASIC --staging-bucket $BUCKET --region $REGION
And this is my setup.py:
from setuptools import find_packages
from setuptools import setup

REQUIRED_PACKAGES = ['numpy', 'tensorflow', 'pandas', 'matplotlib', 'opencv-python',
                     'PyYAML', 'coloredlogs', 'scikit-learn', 'scipy', 'matplotlib']
setup(
    name='ConvMultiAttention',
    version='0.9',
    author='name',
    install_requires=REQUIRED_PACKAGES,
    packages=find_packages(),
    include_package_data=True,
)
The model runs fine locally and builds successfully:

I master-replica-0 Successfully installed model-0.9 coloredlogs-10.0 cycler-0.10.0 humanfriendly-4.15.1 kiwisolver-1.0.1 matplotlib-2.2.2 opencv-python-3.4.1.15 pyparsing-2.2.0
I master-replica-0 Running command: python3 -m train.task_w2v

But then it hits this exception:

master-replica-0 Traceback (most recent call last):
  File "/usr/lib/python3.5/tkinter/__init__.py", line 36, in <module>
    import _tkinter
ImportError: No module named '_tkinter'
master-replica-0 Command '['python3', '-m', 'train.task_w2v']' returned non-zero exit status 1
Since my understanding is that tkinter is part of Python 3.5, I don't really know what goes wrong here, or what to do. I tried to run it without matplotlib and with a lower TF version, but the problems persisted.
Also I get these warnings:
googlecloudspanner 0.29.0 has requirement requests<3.0dev,>=2.18.4, but you'll have requests 2.13.0 which is incompatible.
The script humanfriendly is installed in '/root/.local/bin' which is not on PATH.
Which I don't really know how to handle, or if I even need to.
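One plausible culprit (an assumption, since the logs alone don't prove it): matplotlib is in REQUIRED_PACKAGES, and its default interactive backend imports tkinter, which headless cloud workers don't ship. The usual workaround is to force a non-interactive backend before the first pyplot import; a minimal sketch:

```python
# Force a non-interactive backend BEFORE the first "import matplotlib.pyplot".
# The Agg backend renders to raster buffers and never touches _tkinter.
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
fig.savefig('/tmp/sanity.png')  # rendering works with no display attached
```

This must run in the entry module (here, presumably train.task_w2v) before anything else imports pyplot, otherwise the default backend is already locked in.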

Getting Youtube Favorites/Likes of a channel with API
Using the YouTube API, in Python 3.5.2, I want to fetch the likes/favorites of a given channel id. Here is the code I used:

channel_details = youtube.channels().list(id=channel_id, part='snippet, contentDetails').execute()
for c_detail in channel_details['items']:
    c_upload_list_id = c_detail['contentDetails']['relatedPlaylists']['Favorites']
According to the docs, https://developers.google.com/youtube/v3/docs/channels, the "favorites"/"likes" properties are present in the "contentDetails" object. However, I'm getting the following error:
KeyError: 'favorites'
I get the same error when trying to fetch the likes/favorites of my own YouTube channel (even if they are set to public).
Can someone help me?
Thanks
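Two things worth checking (stated as assumptions, since the full API response isn't shown): the response keys are lowercase ('favorites', not 'Favorites'), and relatedPlaylists simply omits entries a channel doesn't expose, so dict.get() is safer than direct indexing. A sketch against a hypothetical, trimmed response:

```python
# Hypothetical channels().list response, trimmed to the fields involved.
channel_details = {
    'items': [
        {'contentDetails': {'relatedPlaylists': {'uploads': 'UUxxxx'}}},
    ]
}

playlist_ids = []
for c_detail in channel_details['items']:
    related = c_detail['contentDetails']['relatedPlaylists']
    # lowercase keys; .get() returns None instead of raising KeyError
    fav = related.get('favorites')
    likes = related.get('likes')
    playlist_ids.append((fav, likes))

print(playlist_ids)
```

With direct indexing (related['favorites']) the same absent key raises exactly the KeyError: 'favorites' seen above.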

Using Scipy's deconvolve function to deconvolve electrodermal activity data
I wish to deconvolve an EDA (electrodermal activity) signal using a Bateman function as the filter as described here, using Scipy's deconvolve function.
However, when I attempt this, the deconvolution graph does not look how I expect it to. Namely, it generally takes the shape of a mostly flat line, sometimes with spikes at multiples of the filter length:
What am I missing here? Should I be smoothing the EDA curve? Am I hoping for too much from deconvolve? My code is below:

import csv
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as signal
import math

with open('test session 1.csv', newline='') as csvfile:
    filereader = csv.reader(csvfile, delimiter=' ')
    i = 0
    timestamps = []
    conductances = []
    for row in filereader:
        i += 1
        fields = ' '.join(row).split()
        if i > 3:
            timestamps.append(float(fields[0]))
            conductances.append(float(fields[5]))

timestamps = [timestamp - timestamps[0] for timestamp in timestamps]

c = 10.
tau1 = 300
tau2 = 2000
bateman = [c * (math.exp(-time / tau2) - math.exp(-time / tau1)) for time in timestamps]
bateman = bateman[3:1700]

deconv, remain = signal.deconvolve(conductances, bateman)

fig, ax = plt.subplots(nrows=4)
ax[0].plot(conductances, label="EDA Signal")
ax[1].plot(bateman, label="Bateman Function")
ax[2].plot(deconv, label="Deconvolution Result")
ax[3].plot(remain, label="Remainder")
for i in range(len(ax)):
    ax[i].legend(loc=4)
plt.show()

Jupyter does not show folder in my working directory
I am running the Jupyter notebook Docker image on a MacBook Pro. When starting, the Jupyter home page only shows some of the folders in the working directory. When I cd to a folder and use it as the working directory, I get the message "Notebook list is empty." See the examples below.
My directory:
LewIssMacBookPro:MyTensorFlow lewleib$ ls
Gorner_tensorflow-mnist    Tensor2018          models
Gorner_tensorflow-rnn      Untitled.ipynb      tensorflow
MyDeepTest                 generate_hmb3.py    tensorflow-without-a-phd-master
My_tensor1.html            guided              testgen
NeuralNet1.ipynb           install.sh
README.md                  mnist
One level down:
LewIssMacBookPro:MyDeepTest lewleib$ ls
README.md  generate_hmb3.py  guided  install.sh  models  testgen
And one level more:
LewIssMacBookPro:guided lewleib$ ls
chauffeur_guided.py epoch_guided.py ncoverage.py rambo_guided.py
When I try to launch the Jupyter notebook:
LewIssMacBookPro:guided lewleib$ docker run -it -p 8888:8888 -p 6006:6006 -v ~/lewleib/MyTensorFlow/MyDeepTest/guided:/notebooks tensorflow/tensorflow
I get the following:
guided Last Modified Name ..seconds ago The notebook list is empty.

TensorFlow 1.3 ROCm port: cannot open '_pywrap_tensorflow_internal'
In Ubuntu 16.04.4, I installed the TensorFlow 1.3 ROCm port (for an AMD Radeon RX Vega 64) according to the instructions starting at "Install required python packages" in
where I had previously installed ROCm from the AMD Debian repository according to the instructions in
https://github.com/RadeonOpenCompute/ROCm
Then, using pip to install the TF .whl package with no virtualization:

$ wget http://repo.radeon.com/rocm/misc/tensorflow/tensorflow-1.3.0-cp27-cp27mu-manylinux1_x86_64.whl
$ sudo python -m pip install tensorflow-1.3.0-cp27-cp27mu-manylinux1_x86_64.whl
When I try to verify the installation using
$ python -c "import tensorflow as tf; print(tf.__version__)"
I get the following error:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/__init__.py", line 24, in <module>
    from tensorflow.python import *
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 52, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 41, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
ImportError: libCXLActivityLogger.so: cannot open shared object file: No such file or directory
I verified that _pywrap_tensorflow_internal.so is present:
$ find / -name \*pywrap\* -ls 2>/dev/null
27526810      4 -rw-r--r--  1 root staff      2558 Jul 20 11:41 /usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow.py
27526811      4 -rw-r--r--  1 root staff      1312 Jul 20 11:41 /usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow.pyc
27526813     92 -rw-r--r--  1 root staff     93912 Jul 20 11:41 /usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow_internal.pyc
27526815 227172 -rwxr-xr-x  1 root staff 232620600 Jul 20 11:41 /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow_internal.so
27526816     72 -rw-r--r--  1 root staff     70386 Jul 20 11:41 /usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py
Also checked my wheel and pip versions:
$ pip list | grep wheel
wheel 0.29.0
$ pip -V
pip 10.0.1 from .../python2.7/site-packages/pip (python 2.7)
At first glance it looks as if some environment variable is not set, so that libCXLActivityLogger.so (a dependency of _pywrap_tensorflow_internal.so) is not being found on the search path. Can anyone tell me if this is the case, or if the source of the problem is elsewhere? I did some searches and have come up essentially empty. Thanks in advance for any helpful responses.
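That reading is consistent with the traceback: the library that fails to load is libCXLActivityLogger.so, which the dynamic linker resolves via LD_LIBRARY_PATH. A sketch of the usual check and fix (the /opt/rocm location is an assumption based on where the ROCm Debian packages typically install; locate the file first):

```shell
# 1) Find where (or whether) the missing library actually exists:
find /opt/rocm -name 'libCXLActivityLogger.so*' 2>/dev/null

# 2) If found, add its directory to the dynamic-linker search path
#    (the exact directory below is an assumption -- substitute what step 1 printed):
export LD_LIBRARY_PATH="/opt/rocm/lib:${LD_LIBRARY_PATH}"

# 3) Re-test the import in the same shell:
python -c "import tensorflow as tf; print(tf.__version__)"
```

A more permanent alternative is dropping the directory into a file under /etc/ld.so.conf.d/ and running ldconfig.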

Calculate the gradients of the last state with respect to the initial state with GRU and understanding gradient tensor sizes in Tensorflow
I have the following model in tensorflow:
def output_layer(input_layer, num_labels):
    '''
    :param input_layer: 2D tensor
    :param num_labels: int. How many output labels in total? (10 for cifar10 and 100 for cifar100)
    :return: output layer Y = WX + B
    '''
    input_dim = input_layer.get_shape().as_list()[1]
    fc_w = create_variables(name='fc_weights', shape=[input_dim, num_labels],
                            initializer=tf.uniform_unit_scaling_initializer(factor=1.0))
    fc_b = create_variables(name='fc_bias', shape=[num_labels],
                            initializer=tf.zeros_initializer())
    fc_h = tf.matmul(input_layer, fc_w) + fc_b
    return fc_h

def model(input_features):
    with tf.variable_scope("GRU_Layer1"):
        cell1 = tf.nn.rnn_cell.GRUCell(gru1_cell_size)  # shape=(?, 64) ... gru1_cell_size=64
        initial_state1 = tf.placeholder(shape=[None, gru1_cell_size], dtype=tf.float32, name="initial_state1")
        output1, new_state1 = tf.nn.dynamic_rnn(cell1, input_features, dtype=tf.float32, initial_state=initial_state1)
    with tf.variable_scope("GRU_Layer2"):
        cell2 = tf.nn.rnn_cell.GRUCell(gru2_cell_size)  # shape=(?, 32) ... gru2_cell_size=32
        initial_state2 = tf.placeholder(shape=[None, gru2_cell_size], dtype=tf.float32, name="initial_state2")
        output2, new_state2 = tf.nn.dynamic_rnn(cell2, output1, dtype=tf.float32, initial_state=initial_state2)
    with tf.variable_scope("output2_reshaped"):
        # before, shape: (34, 100, 32); after, shape: (34 * 100, 32)
        output2 = tf.reshape(output2, shape=[-1, gru2_cell_size])
    with tf.variable_scope("output_layer"):
        # shape: (34 * 100, 3), num_labels=3
        predictions = output_layer(output2, num_labels)
        predictions = tf.reshape(predictions, shape=[-1, 100, 3])
    return predictions, initial_state1, initial_state2, new_state1, new_state2
As we can see from the code, the cell size of the first GRU is 64 and the cell size of the second GRU is 32. The batch size is 34 (but this is not important for me now), and the size of the input features is 200. I tried computing the gradients of the loss with respect to the trainable variables through:
# only the gradients are taken, to add them later to the back-propagated
# gradients from the previous batch
local_grads_and_vars = optimizer.compute_gradients(loss, tf.trainable_variables())
local_grads = [grad for grad, var in local_grads_and_vars]
for v in local_grads:
    print("v", v)
After printing out the grads I got the following:
v Tensor("Optimizer/gradients/GRU_Layer1/rnn/while/gru_cell/MatMul/Enter_grad/b_acc_3:0", shape=(264, 128), dtype=float32)
v Tensor("Optimizer/gradients/GRU_Layer1/rnn/while/gru_cell/BiasAdd/Enter_grad/b_acc_3:0", shape=(128,), dtype=float32)
v Tensor("Optimizer/gradients/GRU_Layer1/rnn/while/gru_cell/MatMul_1/Enter_grad/b_acc_3:0", shape=(264, 64), dtype=float32)
v Tensor("Optimizer/gradients/GRU_Layer1/rnn/while/gru_cell/BiasAdd_1/Enter_grad/b_acc_3:0", shape=(64,), dtype=float32)
v Tensor("Optimizer/gradients/GRU_Layer2/rnn/while/gru_cell/MatMul/Enter_grad/b_acc_3:0", shape=(96, 64), dtype=float32)
v Tensor("Optimizer/gradients/GRU_Layer2/rnn/while/gru_cell/BiasAdd/Enter_grad/b_acc_3:0", shape=(64,), dtype=float32)
v Tensor("Optimizer/gradients/GRU_Layer2/rnn/while/gru_cell/MatMul_1/Enter_grad/b_acc_3:0", shape=(96, 32), dtype=float32)
v Tensor("Optimizer/gradients/GRU_Layer2/rnn/while/gru_cell/BiasAdd_1/Enter_grad/b_acc_3:0", shape=(32,), dtype=float32)
v Tensor("Optimizer/gradients/output_layer/MatMul_grad/tuple/control_dependency_1:0", shape=(32, 3), dtype=float32)
v Tensor("Optimizer/gradients/output_layer/add_grad/tuple/control_dependency_1:0", shape=(3,), dtype=float32)
Here is the GRU cell from "Mastering TensorFlow 1.x: Advanced machine learning and deep learning concepts using TensorFlow 1.x and Keras" book. Here is the link as well: https://play.google.com/store/books/details?id=xtRJDwAAQBAJ&rdid=bookxtRJDwAAQBAJ&rdot=1&source=gbs_vpt_read&pcampaignid=books_booksearch_viewport
So I was trying to understand the shapes of the gradients after printing out the local_grads tensors as shown above. From the GRU cell shown above I assumed that:
1. A grad tensor with shape (264, 128) is used to calculate the activations before the inputs to r() and u(). If the output of r() and u() is 64, then there is also a tensor of shape (128,).
2. Since the output size of the first GRU is 64, I assumed that the input to the second GRU layer will be of size 64 + 32 (the cell size of the second GRU), which gives 96. Hence, similar to point 1, the gradient tensors will have shapes (96, 64) and (64,).
3. Given that there is a dense layer after the second GRU layer, and since the output is of size 3, there is a gradient tensor for the corresponding weights of shape (32, 3) and (3,).

My concern is why we also have tensors of shapes (264, 64), (64,) and (96, 32), (32,).

Second, assume that I saved the gradients after training the model on the first batch, that is, after feeding a tensor of shape (34, 100, 200) as input_features (the model function argument) and getting an output of shape (34 * 100, 3). How do I back-propagate these gradients on the second mini-batch? I would like to fill the gap as in the following image from https://deepmind.com/blog/decoupled-neural-networks-using-synthetic-gradients/:
Where instead of having synthetic gradients, I would like to back propagate the gradients from the previous time step. So I was trying something like:
prev_grads_val__ = tf.gradients([new_state1, new_state2],
                                [initial_state1, initial_state2],
                                grad_ys=previous_gradients)
but this won't work, giving the following error:

ValueError: Passed 10 grad_ys for 2 ys

And then prev_grads_val__ should be added to local_grads before performing the back-propagation. Any help is much appreciated!

How to train a machine learning algorithm to find this pattern: x1 < x2 without generating a new feature (e.g. x1x2) first?
If I had 2 features x1 and x2 where I know that the pattern is:
if x1 < x2 then class1 else class2
How can I train a machine learning algorithm to find such a pattern?
I know that I could create a third feature x3 = x1 - x2. Feature x3 can then easily be used by some machine learning algorithms. For example, a decision tree can solve the problem 100% using x3 and just 3 nodes (1 decision node and 2 leaf nodes).
But, is it possible to solve this without creating new features?
I tried an MLP and SVMs with different kernels, including the RBF kernel, and the results are not great. This seems like a problem that should be easily solved 100% if a machine learning algorithm could only find such a pattern.
As an example of what I tried, here is the scikit-learn code where the SVM could only get a score of 0.992:

import numpy as np
from sklearn.svm import SVC

# Generate 1000 samples with 2 features with random values
X_train = np.random.rand(1000, 2)

# Label each sample. If feature "x1" is less than feature "x2" then label as 1, otherwise label is 0.
y_train = X_train[:, 0] < X_train[:, 1]
y_train = y_train.astype(int)  # convert boolean to 0 and 1

svc = SVC(kernel="rbf", C=0.9)  # tried all kernels and C values from 0.1 to 1.0
svc.fit(X_train, y_train)
print("SVC score: %f" % svc.score(X_train, y_train))
Output running the code:
SVC score: 0.992000
This is an oversimplification of my problem. The real problem may have hundreds of features and different patterns, not just x1 < x2. However, to start with, it would help a lot to know how to solve this simple pattern.
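For what it's worth, the concept x1 < x2 is itself linear (the decision boundary is x2 - x1 = 0), so any linear classifier can represent it exactly with no engineered feature; the RBF-SVM's 0.992 comes from approximating a straight boundary with local kernels. A minimal sketch (plain NumPy, a from-scratch perceptron; all names are mine, not from the question):

```python
import numpy as np

# x1 < x2 is linearly separable: the weight vector (-1, 1) with zero bias
# classifies it perfectly, so a single linear unit can learn the pattern.
rng = np.random.RandomState(0)
X = rng.rand(1000, 2)
y = (X[:, 0] < X[:, 1]).astype(int)

w = np.zeros(2)
b = 0.0
for _ in range(50):                 # a few passes over the data
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        update = yi - pred          # 0 when correct, +/-1 when wrong
        w += update * xi
        b += update

train_acc = np.mean((X @ w + b > 0).astype(int) == y)
print(w, b, train_acc)
```

The learned weights end up pointing along (-1, 1), i.e. the model has internally recovered the x2 - x1 feature. The practical takeaway is that a linear kernel (or logistic regression) fits this concept far better than RBF does.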

Best Machine Learning Algorithm to Rank Top 10 Items Based on Different Attributes?
I am working on a project where I want to retrieve the top n selections from a dataset based on different attributes. Let's say I am looking for the best store to buy a product. The algorithm will take in location, prices, closing/opening times, return policy, and whether they have the product, and it will return the top n (let's say 10) stores it has found in the dataset.
I want to know what the best machine learning algorithm is for this scenario.

How to consider word pairs/phrases for Word2Vec and other preprocessing
So it's my first time using Word2Vec, and I'm using a Wikipedia dump with WikiCorpus to preprocess the file before training my Word2Vec model. I want to use the following preprocessing techniques:

- Convert all letters to lowercase (I think WikiCorpus does this already).
- Remove all punctuation (done by WikiCorpus).
- Consider word pairs/phrases as a single word, for example 'Big Apple' -> 'big_apple', not 'big', 'apple'.
- Convert all digits to their word forms, so '3' -> 'three'.

At the moment I have no idea how to do the last two. I know about num2text but I'm not sure how to incorporate it with WikiCorpus or Word2Vec. Can anyone help?
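For the phrase problem, gensim (the library WikiCorpus comes from) ships a Phrases model built for exactly this; the underlying idea — merge a bigram into one token when it co-occurs much more often than its parts would by chance — can be sketched without gensim (toy corpus and threshold values are mine, purely illustrative):

```python
from collections import Counter

# Toy corpus; a real pipeline would stream sentences out of WikiCorpus.
sentences = [
    ['i', 'love', 'big', 'apple'],
    ['big', 'apple', 'is', 'a', 'nickname'],
    ['a', 'big', 'dog'],
]

unigrams = Counter(w for s in sentences for w in s)
bigrams = Counter()
for s in sentences:
    bigrams.update(zip(s, s[1:]))

def merge_phrases(sentence, min_count=2):
    # Greedy left-to-right pass: join (a, b) into "a_b" when the pair is
    # frequent enough AND frequent relative to its parts' counts.
    out, i = [], 0
    while i < len(sentence):
        if i + 1 < len(sentence):
            pair = (sentence[i], sentence[i + 1])
            n = bigrams[pair]
            score = n / (unigrams[pair[0]] * unigrams[pair[1]])
            if n >= min_count and score > 0.1:
                out.append('_'.join(pair))
                i += 2
                continue
        out.append(sentence[i])
        i += 1
    return out

print([merge_phrases(s) for s in sentences])
```

Here 'big apple' merges to 'big_apple' while 'big dog' stays split. After the merge pass (or gensim's Phrases), the transformed sentences can be fed straight into Word2Vec, and the same per-sentence hook is a natural place to apply digit-to-word conversion.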

Keras AttributeError: 'module' object has no attribute '_TensorLike'
I am practicing using Keras to build a convolutional neural network. I decided to follow along with this tutorial: http://adventuresinmachinelearning.com/keras-tutorial-cnn-11-lines/
However, when attempting to fit my convolutional model I run into the following error:
AttributeError: 'module' object has no attribute '_TensorLike'
Here is my code:

from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.layers import Dense, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.models import Sequential
import matplotlib.pylab as plt

batch_size = 128
num_classes = 10
epochs = 10

# input image dimensions
img_x, img_y = 28, 28

# load the MNIST data set, which already splits into train and test sets for us
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# reshape the data into a 4D tensor - (sample_number, x_img_size, y_img_size, num_channels)
# because MNIST is greyscale, we only have a single channel - RGB colour images would have 3
x_train = x_train.reshape(x_train.shape[0], img_x, img_y, 1)
x_test = x_test.reshape(x_test.shape[0], img_x, img_y, 1)
input_shape = (img_x, img_y, 1)

# convert the data to the right type
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices - this is for use in the
# categorical_crossentropy loss below
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Conv2D(32, kernel_size=(5, 5), strides=(1, 1),
                 activation='relu', input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Conv2D(64, (5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(1000, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adam(),
              metrics=['accuracy'])

class AccuracyHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.acc = []

    def on_epoch_end(self, batch, logs={}):
        self.acc.append(logs.get('acc'))

history = AccuracyHistory()

model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test),
          callbacks=[history])
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
plt.plot(range(1, 11), history.acc)
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.show()
I have installed Keras and upgraded it to the latest version (2.2.0). I have also installed and upgraded TensorFlow, and I have Python version 3.4. My input shape is a (28, 28, 1) tensor (the 1 because these images are greyscale). Can someone please help me, because I am quite lost.

Why my code is running slower in GOOGLE COLAB than in my notebook?
I'm beginning to learn how to use Google Colab, and in my first test (using Python 3) the code below executed a lot faster on my computer than in Colaboratory (GPU), so I imagine there's something really wrong.

Here's what I'm doing:

MY COMPUTER: with an i5-7200U CPU (2.5 GHz), I installed TensorFlow for CPU. The execution time was 65 seconds.

COLABORATORY: I changed the runtime type to GPU and loaded the necessary file (a 24 kB .csv, via the Colab upload). It took 261 seconds.

I would really appreciate some help understanding what is wrong; maybe something with my code?
from keras.models import Sequential  # sequential layers
from keras.layers import Dense
import numpy
import time
import pydot as pydot
from keras.utils.vis_utils import plot_model

numpy.random.seed(7)

# load the file
dataset = numpy.loadtxt("pima-indians-diabetes.data.csv", delimiter=",")
start_time = time.process_time()

# split into INPUT (X) and OUTPUT (Y) variables
X = dataset[:, 0:8]
Y = dataset[:, 8]

# create the model
model = Sequential()
# add a layer with 12 neurons and 8 inputs
model.add(Dense(12, input_dim=8, activation='relu'))  # rectifier as activation function
# second layer with 8 neurons
model.add(Dense(8, activation='relu'))
# final layer for classification
model.add(Dense(1, activation='sigmoid'))

# configure the learning process
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# train the model
model.fit(X, Y, epochs=500, batch_size=10, verbose=0)
# evaluate the model
scores = model.evaluate(X, Y)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1] * 100))
# test the model
predections = model.predict(X)
rounded = [round(x[0]) for x in predections]
print(time.process_time() - start_time, "seconds")

ConvNet Model Accuracy doesn't go beyond 50%
I'm currently training a ConvNet to classify ECGs with noise and without noise. But my model does not improve even though I change the number of layers or hyperparameters such as the learning rate and the number of filters.
My Dataset
My dataset has two types of images. One type is "With Noise", which has 1220 images of 448*448, and the other type is "Without Noise", with the same number of images. I have 760 images combined from both categories for testing.
To make the model accurate, I used the Keras ImageDataGenerator function to augment the images to 15,000 training images and 3,750 test images.
My Model
Below is the code of my Keras model.

from keras.models import Sequential
from keras.datasets import mnist
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
from keras.callbacks import TensorBoard
from keras.callbacks import EarlyStopping
from keras.layers import ZeroPadding2D
from keras.optimizers import Adam

tensorboard = TensorBoard(log_dir="./logs", histogram_freq=0, write_graph=True, write_images=True)

# Variables
batchSize = 15
num_of_samples = 15000
learning_rate = 0.01

training_imGenProp = ImageDataGenerator(rotation_range=5,
                                        width_shift_range=0.2,
                                        height_shift_range=0.2,
                                        horizontal_flip=False,
                                        fill_mode='nearest')

training_imGen = training_imGenProp.flow_from_directory(
    'Directory',
    target_size=(37, 224),
    batch_size=batchSize,
    color_mode='rgb',
    class_mode='categorical',
    classes=['With Noise', 'Without Noise']
)

model = Sequential()
model.add(ZeroPadding2D(padding=(187, 0), input_shape=(37, 224, 3)))
model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), padding='same', activation='relu'))
model.add(Conv2D(128, (3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(2))
model.add(Activation('sigmoid'))
model.summary()

adam = Adam(lr=learning_rate)
model.compile(loss='binary_crossentropy', optimizer=adam, metrics=['accuracy'])

history = model.fit_generator(
    training_imGen,
    callbacks=[tensorboard, earlystopping],
    steps_per_epoch=num_of_samples // batchSize,
    epochs=30,
)
I also generate the test data of 3,750 images from 380 images using the same configuration in the ImageDataGenerator function above.
My Problem
This model only gives me 50% training accuracy and 54% testing accuracy. Below are the TensorBoard outputs for the two configurations I used.
Below are two sample images from my With Noise and without noise classes.
What I did
So I changed the model by:

- Changing the learning_rate through the configurations 0.01, 0.1, 0.05 and 0.5. But each time accuracy was 50%. Not even 51%.
- Changing the number of filters in each layer, keeping the same number of layers and layer order but applying only half the number of filters (e.g. 32 instead of 64). But I still got 50% training accuracy and 54% test accuracy.
- Changing the number of layers, using only one conv layer (32 filters) with max pooling. But still 50% training accuracy and 54% testing accuracy.
- Changing the dropout from 0.5 to 0.8, trying 0.5, 0.6, 0.7 and 0.8. All 4 times I unbelievably got 50% training accuracy and 54% testing accuracy.
I have no idea what causes the problem, even though I've changed almost everything in the model and have run almost every combination of possible hyperparameters. But nothing gets my training and testing accuracy anywhere near 80 or 90%. Any idea what I should do?
Update

Even after the changes I've made according to the suggestions by nuric, I still get 50% training accuracy in both the binary and categorical configurations.