Installing TensorFlow GPU in Anaconda 5.3
I have recently installed Anaconda 5.3, and it came with Python 3.7 preinstalled. When I checked the TensorFlow website, it says TensorFlow does not support 3.7, only 3.6:
TensorFlow requires Python 3.4, 3.5, or 3.6
What can I do now to get tensorflow-gpu working on my PC? Is there any workaround? Any help is appreciated.
You should create a new conda virtual environment with Python 3.6, then install tensorflow into that. At the creation of the new environment you can freely choose the Python version you want to use; this is one of the main points of virtual environments.
A bit more detail:
Inside Anaconda Navigator you can choose Environments, then Create. Here you can give the new environment a name and choose the base package (i.e. Python or R) and the version you want to use for the environment. Then you install your custom packages beside the default ones; here you can install tensorflow-gpu too. For installing packages you can use the Anaconda Navigator GUI, conda in the Anaconda command shell (conda install <package>), or pip in the Anaconda command shell (pip install <package>). In general you should prefer Anaconda package management (the GUI or conda), so that package versions stay consistent, which conda manages well. In some cases, however, you may choose pip install; always follow the package developer's suggestion.
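In practice the steps above look roughly like this in the Anaconda command shell (a sketch; the environment name "tf-gpu" is just an example, and exact package availability depends on your conda channels):

```shell
# Create a new environment pinned to Python 3.6
conda create -n tf-gpu python=3.6
# Activate it, then install the GPU build of TensorFlow into it
conda activate tf-gpu
conda install tensorflow-gpu
```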
See also questions close to this topic
Linear Regression - Unexpected Results (Python)
I am getting unexpected results from my implementation of linear regression. Sometimes I get an out-of-memory error, or overflow errors from the squaring and multiplication; basically the numbers run out of size.
The code seems pretty okay to me, and I'm unable to identify why it fails to work.
import numpy as np

X = np.array([ 6.1101, 5.5277, 8.5186, 7.0032, 5.8598, 8.3829, 7.4764, 8.5781, 6.4862, 5.0546,
               5.7107, 14.164, 5.734, 8.4084, 5.6407, 5.3794, 6.3654, 5.1301, 6.4296, 7.0708,
               6.1891, 20.27, 5.4901, 6.3261, 5.5649, 18.945, 12.828, 10.957, 13.176, 22.203,
               5.2524, 6.5894, 9.2482, 5.8918, 8.2111, 7.9334, 8.0959, 5.6063, 12.836, 6.3534,
               5.4069, 6.8825, 11.708, 5.7737, 7.8247, 7.0931, 5.0702, 5.8014, 11.7, 5.5416,
               7.5402, 5.3077, 7.4239, 7.6031, 6.3328, 6.3589, 6.2742, 5.6397, 9.3102, 9.4536,
               8.8254, 5.1793, 21.279, 14.908, 18.959, 7.2182, 8.2951, 10.236, 5.4994, 20.341,
               10.136, 7.3345, 6.0062, 7.2259, 5.0269, 6.5479, 7.5386, 5.0365, 10.274, 5.1077,
               5.7292, 5.1884, 6.3557, 9.7687, 6.5159, 8.5172, 9.1802, 6.002, 5.5204, 5.0594,
               5.7077, 7.6366, 5.8707, 5.3054, 8.2934, 13.394, 5.4369])
y = np.array([17.592, 9.1302, 13.662, 11.854, 6.8233, 11.886, 4.3483, 12., 6.5987, 3.8166,
              3.2522, 15.505, 3.1551, 7.2258, 0.71618, 3.5129, 5.3048, 0.56077, 3.6518, 5.3893,
              3.1386, 21.767, 4.263, 5.1875, 3.0825, 22.638, 13.501, 7.0467, 14.692, 24.147,
              -1.22, 5.9966, 12.134, 1.8495, 6.5426, 4.5623, 4.1164, 3.3928, 10.117, 5.4974,
              0.55657, 3.9115, 5.3854, 2.4406, 6.7318, 1.0463, 5.1337, 1.844, 8.0043, 1.0179,
              6.7504, 1.8396, 4.2885, 4.9981, 1.4233, -1.4211, 2.4756, 4.6042, 3.9624, 5.4141,
              5.1694, -0.74279, 17.929, 12.054, 17.054, 4.8852, 5.7442, 7.7754, 1.0173, 20.992,
              6.6799, 4.0259, 1.2784, 3.3411, -2.6807, 0.29678, 3.8845, 5.7014, 6.7526, 2.0576,
              0.47953, 0.20421, 0.67861, 7.5435, 5.3436, 4.2415, 6.7981, 0.92695, 0.152, 2.8214,
              1.8451, 4.2959, 7.2029, 1.9869, 0.14454, 9.0551, 0.61705])
theta = np.array([0, 0])

# Calculates cost function for given theta
def costFunction(X, y, theta):
    m = y.size
    hypothesis = (X * theta) + theta
    return (1/m) * sum((hypothesis - y) ** 2)

def slope(X, y, theta):
    m = y.size
    hypothesis = (X * theta) + theta
    theta_0 = 2/(m) * sum(hypothesis - y)
    theta_1 = 2/(m) * sum((hypothesis - y) * X)
    return np.array([theta_0, theta_1])

# running the gradient descent with 200 iters with learning rate 0.1
for i in range(200):
    theta = theta - 0.1 * slope(X, y, theta)
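For reference, the standard univariate hypothesis is h(x) = theta0 + theta1 * x, whereas the code above computes (X * theta) + theta. A minimal sketch of gradient descent under the standard form (not the questioner's code, with an illustrative learning rate) would be:

```python
import numpy as np

def gradient_descent(X, y, alpha=0.01, iters=1500):
    # Fit y ≈ theta0 + theta1 * X by batch gradient descent.
    theta0, theta1 = 0.0, 0.0
    m = y.size
    for _ in range(iters):
        h = theta0 + theta1 * X             # current predictions
        g0 = (1 / m) * np.sum(h - y)        # gradient w.r.t. the intercept
        g1 = (1 / m) * np.sum((h - y) * X)  # gradient w.r.t. the slope
        theta0 -= alpha * g0
        theta1 -= alpha * g1
    return theta0, theta1
```

Note that a too-large learning rate makes the updates diverge, which shows up as exactly the overflow ("run out of size") errors described above.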
Remove list type in columns while preserving list structure
I have two columns that from the way my data was pulled are in lists. This may be a really easy question, I just haven't found the exactly correct way to create the result I'm looking for.
I need the "c_u" column to be a plain string without the brackets, and the "tawgs.db_id" column to be integers separated by commas, if that's possible.
I've tried this code:
df['c_u'] = df['c_u'].astype(str)
to convert c_u to a string, but it failed to produce what I wanted. What I need the output to look like is:
c_u                  tawgs.db_id
hbhprecision.com     10813,449,6426,6427
thomsonreuters.com   12519,510,6426
Please help and thank you very much in advance!
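A sketch of one way to do this, assuming each cell really holds a Python list (the sample values below are copied from the desired output above):

```python
import pandas as pd

df = pd.DataFrame({
    "c_u": [["hbhprecision.com"], ["thomsonreuters.com"]],
    "tawgs.db_id": [[10813, 449, 6426, 6427], [12519, 510, 6426]],
})

# Join each list of strings into a single plain string (drops the brackets)
df["c_u"] = df["c_u"].str.join(", ")
# Join each list of integers into a comma-separated string
df["tawgs.db_id"] = df["tawgs.db_id"].apply(lambda ids: ",".join(str(i) for i in ids))
```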
How can I say "if an error occurs, do not run this code"?
I want to write Python code that basically says: if an error occurs while running int(a), print('valid character'); if no error occurs, print('invalid character'). Here a is an input.
I want to make a simple hangman game, and if the input is not a letter I want a certain message to be displayed. I tried using "if a == int()", but inputs are always strings.
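In Python, this "branch on whether an error occurred" pattern is a try/except block. A minimal sketch (the function name check_guess is made up for illustration):

```python
def check_guess(a):
    # int(a) raises ValueError when a is not a number;
    # for hangman that means the input could be a letter.
    try:
        int(a)
    except ValueError:
        return "valid character"
    return "invalid character"
```

Note that a.isalpha() would also test directly whether the input is a letter.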
Deploy Semantic Segmentation Network (U-Net) with TensorRT (no upsampling support)
I am trying to deploy a trained U-Net with TensorRT. The model was trained using Keras (with Tensorflow as backend). The code is very similar to this one: https://github.com/zhixuhao/unet/blob/master/model.py
When I converted the model to UFF format, using some code like this:
import uff
import os

uff_fname = os.path.join("./models/", "model_" + idx + ".uff")
uff_model = uff.from_tensorflow_frozen_model(
    frozen_file=os.path.join('./models', trt_fname),
    output_nodes=output_names,
    output_filename=uff_fname
)
I get the following warnings:
Warning: No conversion function registered for layer: ResizeNearestNeighbor yet.
Converting up_sampling2d_32_12/ResizeNearestNeighbor as custom op: ResizeNearestNeighbor
Warning: No conversion function registered for layer: DataFormatVecPermute yet.
Converting up_sampling2d_32_12/Shape-0-0-VecPermuteNCHWToNHWC-LayoutOptimizer as custom op: DataFormatVecPermute
I tried to avoid this by replacing the upsampling layer with upsampling (bilinear interpolation) and transposed convolution, but the converter threw similar errors. I checked https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html and it seems none of these operations are supported yet.
I am wondering if there is any workaround to this problem? Is there any other format/framework that TensorRT likes and has upsampling supported? Or is it possible to replace it with some other supported operations?
I also saw somewhere that one can add customized operations to replace the unsupported ones for TensorRT, though I am not sure what that workflow would look like. It would also be really helpful if someone could point me to an example of custom layers.
Thank you in advance!
How to convert a TensorFlow JSON graph model to .tflite?
I googled TensorFlow model conversion tools but couldn't find anything. I also looked at whether the Android TensorFlow Lite library has a way to import these graph models, but I didn't find anything either.
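For what it's worth, one possible route (an assumption, verified only for TF.js layers-format models, not graph-format ones) is the tensorflowjs converter, which can turn a TF.js JSON model back into a Keras HDF5 file that the TFLite converter accepts:

```shell
# Sketch: TF.js JSON (layers format) -> Keras .h5 -> .tflite
# Assumes the tensorflowjs and tensorflow pip packages are installed;
# model/model.json is a placeholder path.
tensorflowjs_converter --input_format=tfjs_layers_model --output_format=keras \
    model/model.json model/keras_model.h5
tflite_convert --keras_model_file=model/keras_model.h5 --output_file=model/model.tflite
```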
Change weights based on mean prediction in Keras
I'm setting up few-shot image segmentation with Keras. My first network (VGG-like) outputs a feature vector with shape [2, 1024] that separates foreground and background information.
I want to compare one prototypical output (fix image) with the mean feature vector of all the other images. My loss function is a nearest neighbor implementation to update the weights.
My GPU can only take 4 images max. with an accurate resolution.
My question is: how do I calculate the loss based on the mean output of all images while training?
I would be grateful for any hint.
The input is an image with separately marked foreground and background, with shape [2, 512, 512, 1]:
def conv_model(self, input):
    c1 = Conv2D(self.feat_lev_1, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(input)  # added input shape
    c1 = BatchNormalization()(c1)
    c1 = Conv2D(self.feat_lev_1, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c1)
    c1 = BatchNormalization()(c1)
    p1 = MaxPooling2D((2, 2))(c1)

    c2 = Conv2D(self.feat_lev_2, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p1)
    c2 = BatchNormalization()(c2)
    c2 = Conv2D(self.feat_lev_2, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c2)
    c2 = BatchNormalization()(c2)
    p2 = MaxPooling2D((2, 2))(c2)

    c3 = Conv2D(self.feat_lev_3, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p2)
    c3 = BatchNormalization()(c3)
    c3 = Conv2D(self.feat_lev_3, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c3)
    c3 = BatchNormalization()(c3)
    p3 = MaxPooling2D((2, 2))(c3)

    c4 = Conv2D(self.feat_lev_4, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p3)
    c4 = BatchNormalization()(c4)
    c4 = Conv2D(self.feat_lev_4, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c4)
    c4 = BatchNormalization()(c4)
    # added
    #d4 = Dropout(0.5)(c4)
    p4 = MaxPooling2D(pool_size=(2, 2))(c4)

    c5 = Conv2D(self.feat_lev_5, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(p4)
    c5 = BatchNormalization()(c5)
    c5 = Conv2D(self.feat_lev_5, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c5)
    c5 = BatchNormalization()(c5)
    #c5 = Dropout(0.3)(c5)  # 0.5

    result = GlobalAveragePooling2D()(c5)  # activation='softmax'
    return result
I have already used Keras's predict function to extract the mean vector, but running the network always uses the input images separately.
# the prototypical image
prototype = model.predict(xv)
learner_train = DataGenerator_Learner(X, Y, prototype)

# create mean prototype
prototypes = []
for i in range(len(X)):
    element = learner_train.get_data(i)
    prototypes.append(model.predict(element))
mean_prototype = tf.add_n(prototypes) / len(X)

result = model.fit_generator(generator=learner_train,
                             epochs=num_epochs,
                             callbacks=[checkpointer, earlystopper],
                             steps_per_epoch=1)
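A sketch of the mean-prototype idea with a single batched predict call instead of one call per image (mean_prototype is a made-up helper; the [n, 2, 1024] shape comes from the question's description):

```python
import numpy as np

def mean_prototype(model, images):
    # Run all support images through the network in one batch,
    # then average the resulting feature vectors over the batch axis.
    feats = model.predict(np.stack(images))  # shape [n, 2, 1024]
    return feats.mean(axis=0)                # shape [2, 1024]
```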
Anaconda Installation: ERROR: The system was unable to find the specified registry key or value
I installed Anaconda and set up a new environment. That went fine.
Then I tried to open the prompt window of my new environment, and I get the following output every single time:
C:\WINDOWS\system32>IF /I [AMD64] == [amd64] set "platform=true"
C:\WINDOWS\system32>IF /I  == [amd64] set "platform=true"
C:\WINDOWS\system32>if defined platform (set "VSREGKEY=HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\14.0" ) ELSE (set "VSREGKEY=HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\14.0" )
C:\WINDOWS\system32>for /F "skip=2 tokens=2,*" %A in ('reg query "HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\14.0" /v InstallDir') do SET "VSINSTALLDIR=%B"
ERROR: The system was unable to find the specified registry key or value.
C:\WINDOWS\system32>if "" == "" (set "VSINSTALLDIR=" )
C:\WINDOWS\system32>if "" == "" ( ECHO "WARNING: Did not find VS in registry or in VS140COMNTOOLS env var - your compiler may not work" GOTO End )
"WARNING: Did not find VS in registry or in VS140COMNTOOLS env var - your compiler may not work"
The system cannot find the batch label specified - End
I have no clue how to fix this. Any help?
How do I create an Anaconda configuration file (error parsing .yml file)?
I created an environment specification for Anaconda by typing
conda env export > C:/Users/swansom/test144.yml
I attempted to install this environment by typing
conda env create -f C:\Users\swansom\test144.yml
When I try to install an environment from a .yml file generated by my version of Anaconda, it prints the following stack trace:
Traceback (most recent call last):
  File "C:\Users\swansom\AppData\Local\Continuum\anaconda3\lib\site-packages\conda\exceptions.py", line 1043, in __call__
    return func(*args, **kwargs)
  File "C:\Users\swansom\AppData\Local\Continuum\anaconda3\lib\site-packages\conda_env\cli\main.py", line 73, in do_call
    exit_code = getattr(module, func_name)(args, parser)
  File "C:\Users\swansom\AppData\Local\Continuum\anaconda3\lib\site-packages\conda_env\cli\main_create.py", line 77, in execute
    directory=os.getcwd())
  File "C:\Users\swansom\AppData\Local\Continuum\anaconda3\lib\site-packages\conda_env\specs\__init__.py", line 40, in detect
    if spec.can_handle():
  File "C:\Users\swansom\AppData\Local\Continuum\anaconda3\lib\site-packages\conda_env\specs\yaml_file.py", line 18, in can_handle
    self._environment = env.from_file(self.filename)
  File "C:\Users\swansom\AppData\Local\Continuum\anaconda3\lib\site-packages\conda_env\env.py", line 144, in from_file
    return from_yaml(yamlstr, filename=filename)
  File "C:\Users\swansom\AppData\Local\Continuum\anaconda3\lib\site-packages\conda_env\env.py", line 129, in from_yaml
    data = yaml_load_standard(yamlstr)
  File "C:\Users\swansom\AppData\Local\Continuum\anaconda3\lib\site-packages\conda\common\serialize.py", line 76, in yaml_load_standard
    return yaml.load(string, Loader=yaml.Loader, version="1.2")
  File "C:\Users\swansom\AppData\Local\Continuum\anaconda3\lib\site-packages\ruamel_yaml\main.py", line 638, in load
    loader = Loader(stream, version, preserve_quotes=preserve_quotes)
  File "C:\Users\swansom\AppData\Local\Continuum\anaconda3\lib\site-packages\ruamel_yaml\loader.py", line 46, in __init__
    Reader.__init__(self, stream, loader=self)
  File "C:\Users\swansom\AppData\Local\Continuum\anaconda3\lib\site-packages\ruamel_yaml\reader.py", line 80, in __init__
    self.stream = stream  # type: Any  # as .read is called
  File "C:\Users\swansom\AppData\Local\Continuum\anaconda3\lib\site-packages\ruamel_yaml\reader.py", line 112, in stream
    self.check_printable(val)
  File "C:\Users\swansom\AppData\Local\Continuum\anaconda3\lib\site-packages\ruamel_yaml\reader.py", line 233, in check_printable
    'unicode', "special characters are not allowed")
ruamel_yaml.reader.ReaderError: unacceptable character #x0000: special characters are not allowed
  in "<unicode string>", position 3

$ C:\Users\swansom\AppData\Local\Continuum\anaconda3\Scripts\conda-env-script.py create -f C:\Users\swansom\test144.yml
Please let me know what the problem could be.
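The #x0000 in the final error is a null byte, which often means the .yml file was written in UTF-16 (for example by PowerShell's > redirection) rather than the UTF-8 the YAML parser expects; that is a guess from the traceback, not something stated in the question. A quick way to check is to look for null bytes in the file:

```python
def has_null_bytes(path):
    # UTF-16 text files contain 0x00 bytes between ASCII characters;
    # a clean UTF-8 export should contain none.
    with open(path, "rb") as f:
        return b"\x00" in f.read()
```

If the check is positive, re-saving the file as UTF-8 (or re-running the export from cmd.exe rather than PowerShell) may fix the parse error.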
What are the steps in learning Python? Can I get some homework please? Thank you
I have started to study this programming language and am on my way to learning to code. I have been watching video tutorials and reading ebooks, and I understand the basics and concepts of Python. Now I do not know how to proceed further; I feel like I cannot keep watching tutorials all the time.
Steps I tried:
I have read some posts that suggest writing basic programs to keep the learning process moving, so I went ahead and wrote code for a calculator with the basic operations and power calculation. I have learnt things like how a `while True:` loop keeps the program running instead of closing after a single operation. My calculator works well, and I might try to learn how to add buttons etc.
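The "keep the program running" pattern mentioned above can be sketched like this (calculator_loop and its "a op b" command format are made up for illustration; reading commands from a list instead of input() keeps the sketch testable):

```python
def calculator_loop(commands):
    # Loop until a "quit" command, evaluating simple "a op b" expressions.
    results = []
    it = iter(commands)
    while True:
        cmd = next(it, "quit")  # stand-in for input()
        if cmd == "quit":
            break
        a, op, b = cmd.split()
        a, b = float(a), float(b)
        results.append(a + b if op == "+" else a - b)
    return results
```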
Now I want to know: am I on the correct path? Can you help me with some exercise programs to work through, and anything else you feel I should know?
Thank you very much for all the timely help you people give out!