No such package 'slim': BUILD file not found - Tensorflow
While trying to run Google's research code, I run into the error below when following the instructions provided on GitHub. I am on a Linux Mint system with an Anaconda/TensorFlow install. This is the second step in "Getting the Datasets":
~/test/models/research $ bazel run domain_adaptation/datasets:download_and_convert_mnist_m -- --dataset_dir $DSN_DATA_DIR
ERROR: /home/.../models/research/domain_adaptation/datasets/BUILD:29:1: no such package 'slim': BUILD file not found on package path and referenced by '//research/domain_adaptation/datasets:download_and_convert_mnist_m'
ERROR: Analysis of target '//research/domain_adaptation/datasets:download_and_convert_mnist_m' failed; build aborted: no such package 'slim': BUILD file not found on package path
INFO: Elapsed time: 0.168s
However, this change has not been reflected in the BUILD files for https://github.com/tensorflow/models/tree/master/research/domain_adaptation. You will need to fix BUILD targets such as this, this, and this (i.e. changing them to //research/slim:mnist) so that they reference the correct BUILD file.
See also questions close to this topic
Raspberry Pi Zero W - Detect iBeacon and perform an action
I have a Kontakt.io Beacon Pro, which broadcasts iBeacons. I want to detect the iBeacon with a Raspberry Pi (Zero W) and then have the Pi perform an action from a Python script (turn on LEDs via GPIO). I can detect the iBeacon using the hcitool lescan feature of BlueZ, but I don't know how (or whether) I can set up a Python script that detects the iBeacon and, upon detecting it, turns the LEDs on.
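As a hedged sketch (the scanning layer that actually delivers the raw advertisement bytes -- hcidump, a BLE library, etc. -- is omitted here), the iBeacon manufacturer-specific payload has a fixed, documented layout that a small Python function can decode: Apple company ID 4C 00, type 02, length 15, a 16-byte UUID, a 2-byte major, a 2-byte minor, and a signed TX-power byte. The function name is illustrative:

```python
import struct
import uuid

def parse_ibeacon(mfg_data):
    """Return (uuid, major, minor, tx_power) or None if this is not an
    iBeacon manufacturer-specific payload."""
    # Apple company ID 0x004C (little-endian), type 0x02, length 0x15
    if len(mfg_data) < 25 or mfg_data[:4] != b"\x4c\x00\x02\x15":
        return None
    beacon_uuid = uuid.UUID(bytes=bytes(mfg_data[4:20]))
    major, minor = struct.unpack(">HH", mfg_data[20:24])
    (tx_power,) = struct.unpack(">b", mfg_data[24:25])  # signed byte
    return beacon_uuid, major, minor, tx_power
```

Once the parsed UUID matches your Kontakt.io beacon's, the same script can drive the LEDs with RPi.GPIO.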
Setting parameters in pyomo
I am using pyomo. I would like to set the parameter mip.limits.solutions = 1. How can I do this, either with .set_options() or in any other way?
I have tried the following but nothing works:
from pyomo.environ import *

opt = SolverFactory("cplex")
opt.set_options('miplimitssolutions=1')     # does not work
opt.set_options('mip.limits.solutions=1')   # does not work
opt.options['mip'] = 'limits'               # works up to here, but how to continue?
decoding entities for Element tree
Is there a comprehensive way to find HTML entities (including foreign-language characters) and convert them to hexadecimal character references or another encoding accepted by ElementTree? Is there a best practice for this?
I'm parsing a large XML data set that used HTML entities to encode Unicode and special characters. My script passes in an XML file line by line. When I parse the data using Python's ElementTree, I get the following error:
ParseError: undefined entity: line 296, column 29
I started by building a dictionary to parse the string and encode entities as hexadecimal. This alleviated many of the errors, for example converting the trademark symbol ™. However, there is no end in sight, because I have started to find escaped characters such as 'Å' and 'ö', which are for foreign-language text. I have looked at several options and will describe them below.
xmlcharrefreplace: this did not find foreign-language HTML-escaped values.
line = line.encode('ascii', 'xmlcharrefreplace')
HTMLParser.unescape(): did not work, I believe because XML needs some characters escaped, such as '<', '&', and '>'.
h = HTMLParser.HTMLParser()
line = h.unescape(line)
Encoding to UTF-8: did not work, I believe because XML needs some characters escaped.
line = line.encode('utf-8')
BeautifulSoup: this returned a BeautifulSoup object, and converting it back to a string added an XML version tag to each line; even after replacing that, there were other character additions.
line = BeautifulSoup(line, "xml")
line = str(line).replace('<?xml version="1.0" encoding="utf-8"?>', "").replace("\n", "")
htmlentitydefs: still manages to miss many characters, for example '?' and '='; however, this got me further than the other options.
from htmlentitydefs import name2codepoint
line = re.sub('&(%s);' % '|'.join(name2codepoint),
              lambda m: unichr(name2codepoint[m.group(1)]), line)
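Since the goal is just to make each line well-formed XML, one stdlib-only approach (a sketch, not necessarily the only best practice) is to rewrite every named HTML entity as a numeric character reference while leaving XML's five predefined entities alone; ElementTree accepts numeric references natively. The function name is mine:

```python
import re
import xml.etree.ElementTree as ET
from html.entities import name2codepoint  # Python 3; htmlentitydefs on Python 2

XML_PREDEFINED = {"amp", "lt", "gt", "quot", "apos"}

def entities_to_charrefs(text):
    """Rewrite named HTML entities (&trade;, &Aring;, ...) as numeric
    character references, which ElementTree understands, while leaving
    XML's five predefined entities untouched."""
    def repl(match):
        name = match.group(1)
        if name in XML_PREDEFINED:
            return match.group(0)       # keep &amp; etc. as-is
        codepoint = name2codepoint.get(name)
        return "&#x%X;" % codepoint if codepoint else match.group(0)
    return re.sub(r"&([A-Za-z][A-Za-z0-9]*);", repl, text)
```

This covers the full named-entity table in one pass, including the foreign-language characters, instead of growing a hand-built dictionary.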
Can I Feed a Different Start Symbol to RNN Decoder?
When an RNN is used to predict a sentence, the decoder state is initialized with the input <S> (the start symbol), which prompts the decoder to produce the first word of the response, "The"; then, given "The", it produces the next word, and so on.
If my goal was to predict the end of a sequence given the start:
Input: "The car was"
Target: "moving down the hill."
would it make sense to feed the first decoder cell the last input word, "was", instead of <S>? Feeding "was" should increase the information accessible to the decoder, compared to using a static token for every example, which contains no example-specific information.
I'm posting this question because, conversely, it seems like it could be necessary for <S> to remain static during training.
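To make the two choices concrete, here is a minimal pure-Python sketch of building teacher-forcing decoder inputs, with either the static <S> token or the last input word (the function name is mine, not a standard API):

```python
def decoder_inputs(target, start_token="<S>"):
    """Teacher forcing: at step t the decoder sees token t-1 of the
    target; step 0 sees the start token (static or example-specific)."""
    return [start_token] + target[:-1]

inp = ["The", "car", "was"]
tgt = ["moving", "down", "the", "hill."]

static = decoder_inputs(tgt)            # ['<S>', 'moving', 'down', 'the']
dynamic = decoder_inputs(tgt, inp[-1])  # ['was', 'moving', 'down', 'the']
```

The only difference between the two schemes is that first element; everything downstream of step 0 is identical, which is why the question reduces to whether the start position must carry a constant symbol.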
Tensorflow LSTM model parameter learning inside parameter
I'm trying to train my LSTM model in TensorFlow, and my module has to calculate a parameter inside another parameter. I want to train both parameters together. More details are in the picture below.
I think a TensorFlow LSTM module's input must be a complete sequence, with parameters fed in via something like tf.placeholder. How can I do this in TensorFlow? Or can you recommend another framework better suited to this task?
How to not resize input image while running Tensorflow SSD's inference
From what I understand of the Single Shot MultiBox Detector paper, it is a fully convolutional network. As such, it shouldn't require the rescaling (to 300x300) which TensorFlow performs during inference. How can I remove this resizing during inference in TensorFlow?
ImportError: No module named 'xgboost'
When I use
import xgboost as xgb
from xgboost import XGBClassifier
either of the imports gives me
ImportError: No module named xgboost
in a Jupyter notebook.
I have installed xgboost:
Successfully installed numpy-1.14.1 scipy-1.0.0 xgboost-0.7.post3
Do I need to install any prerequisites first?
I am using a Linux system.
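A common cause (an assumption about this setup, not a certainty) is that pip installed xgboost into a different interpreter than the one the notebook kernel runs. Checking the kernel's own interpreter from inside the notebook narrows it down:

```python
import sys

# The interpreter the current kernel/script is running under:
print(sys.executable)

# In a notebook cell, install into exactly that interpreter, rather
# than whichever `pip` happens to be first on PATH:
# !{sys.executable} -m pip install xgboost
```

If the printed path differs from the environment where `pip install xgboost` succeeded, that mismatch is the problem.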
Model implementation in production with python
I built a binary-classification machine learning model in Python.
It works on my laptop (as a command-line tool). Now I want to deploy it to production on a separate server in my company. It has to take inputs from another server (a C# application), make some calculations, and return outputs to it.
My question is: what are the best practices for doing such a thing in production? As far as I know it can be done over a TCP/IP connection.
I am new to this field and I don't know the terms used here, so can anybody guide me?
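As one common pattern (a sketch under the assumption that a plain HTTP/JSON interface is acceptable to the C# side; `predict` here is a stand-in for the real classifier, not your model), the model can be wrapped in a small HTTP service using only the standard library:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Stand-in for the real binary classifier: label 1 iff the
    # feature sum is positive (purely illustrative logic).
    return {"label": int(sum(features) > 0)}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(predict(payload["features"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve (blocks forever); the C# side then POSTs JSON to port 8000:
# HTTPServer(("0.0.0.0", 8000), PredictHandler).serve_forever()
```

In practice a framework such as Flask behind a production WSGI server is the usual choice; the design point is the same: the C# application POSTs a JSON feature vector and reads a JSON prediction back, so neither side needs to share code or a language.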
Spark 2.2: Load org.apache.spark.ml.feature.LabeledPoint from file
The following line of code loads the (soon to be deprecated) mllib.regression.LabeledPoint from file into an RDD.
I'm unable to find the equivalent function for ml.feature.LabeledPoint, which is not yet heavily used in the Spark documentation examples.
Can someone point me to the relevant function?
How to find an individual point's disparity value from a BM/SGBM-generated disparity Mat?
I created a disparity map from stereo pairs using both the BM and SGBM algorithms of OpenCV, but I want to get the disparity value at a particular x and y coordinate. I tried looping over the disparity Mat with the .at function, but it threw an exception. After that I extracted RGB information using
Point3_<uchar>* p = filtered_disp_vis.ptr<Point3_<uchar> >(f_ImageCordinate_u, f_ImageCordinate_v);
but how do I convert this RGB value to a disparity value? I tried in MATLAB as well, where I am taking the index value as the disparity.
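Note that the filtered visualization Mat holds colors, not disparities; the value should be read from the raw output of compute(). For OpenCV's StereoBM/StereoSGBM, the documented convention is that the output Mat is CV_16S holding the disparity multiplied by 16 (4 fractional bits), with negative values marking invalid pixels. A small sketch of reading one coordinate, with a plain Python list of lists standing in for the Mat:

```python
# Plain-Python stand-in for a CV_16S disparity Mat: each stored value
# is the true disparity multiplied by 16 (OpenCV's fixed-point
# convention for StereoBM/StereoSGBM), negatives meaning "invalid".
def disparity_at(disp_map, x, y):
    """Return the true disparity at (x, y), or None if invalid."""
    raw = disp_map[y][x]  # OpenCV Mats are indexed (row, col) = (y, x)
    if raw < 0:
        return None
    return raw / 16.0
```

In C++ the equivalent is `disp.at<short>(y, x) / 16.0f` on the Mat returned by compute(), which avoids the .at-type exception (it was likely caused by reading the CV_16S Mat with the wrong element type).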
Forcing the SLIC superpixel algorithm to produce an exact number of segments
I'm wondering if it's possible to force SLIC to make superpixels with an exact number of segments:
img = skimageIO.imread("first_image.jpeg")
segments_slic = slic(img, n_segments=1000, compactness=0.01, sigma=1)
n_segments=1000 represents the maximum number of segments. What if I would like to get exactly 1000 segments?
Dense optical flow on superpixeled consecutive frames
I would like to compute dense optical flow over two consecutive frames, frame 1 and frame 2.
frame1 = superpixeled(frame1)  # number of superpixels: 596
frame2 = superpixeled(frame2)  # number of superpixels: 603
Here is my code:
os.chdir('/home/of')
files = sorted(glob.glob("*.jpeg"))  # only two frames

def sp_idx(s, index=True):
    # get the pixel indexes and their assignment to superpixels
    u = np.unique(s)
    return [np.where(s == i) for i in u]

video_index = []        # indexes of the pixels assigned to each superpixel
video_superpixels = []  # RGB values of the pixels of all superpixels

# example with only two frames
for frame in files:
    img = skimageIO.imread(frame)
    segments_slic = slic(img, n_segments=1000, compactness=0.01, sigma=1)
    print(np.unique(segments_slic))
    superpixel_list = sp_idx(segments_slic)
    video_index.append(superpixel_list)
    superpixel = [img[idx] for idx in superpixel_list]
    superpixel = np.asarray(superpixel)
    video_superpixels.append(superpixel)
Now I have a superpixeled representation of frame 1 and frame 2.
2) Let's compute dense optical flow on the original frames (not superpixeled)
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
from PIL import Image
import time
import argparse
import pyflow

im1 = np.array(Image.open('/home/frame1.jpeg'))
im2 = np.array(Image.open('/home/frame2.jpeg'))
im1 = im1.astype(float) / 255.
im2 = im2.astype(float) / 255.

# Flow options:
alpha = 0.012
ratio = 0.75
minWidth = 20
nOuterFPIterations = 7
nInnerFPIterations = 1
nSORIterations = 30
colType = 0  # 0 or default: RGB, 1: GRAY (but pass gray image with shape (h, w, 1))

u, v, im2W = pyflow.coarse2fine_flow(
    im1, im2, alpha, ratio, minWidth, nOuterFPIterations,
    nInnerFPIterations, nSORIterations, colType)
flow = np.concatenate((u[..., None], v[..., None]), axis=2)
3) Now that we have the optical flow between frame 1 and frame 2 and their superpixeled representations, I would like to determine the new location in frame 2 of the pixels of frame 1. In other terms: given the superpixeled representations of frame 1 and frame 2 and their optical flow, how can I get the following:
A) What is the new location of each pixel of frame 1 in frame 2, according to the optical flow?
B) Do the superpixels in frame 1 preserve their pixels in the frame 2 superpixels, as in frame 1?
C) A challenging problem appears. Let's say that in frame 1 the superpixel is composed of 5 pixels:
and in frame 2
superpixel = [0, 4, 3, 5, 27, 1]  # most of the superpixel's pixels have moved with respect to the optical flow
How can I make the superpixels vote, in such a way that if 90% of the pixels of a given superpixel in frame 1 move to the same superpixel in frame 2, we add the remaining 10% to that superpixel too?
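Here is a hedged pure-Python sketch of (A) plus the voting in (C): every frame-1 pixel is displaced by its flow vector, each frame-1 superpixel votes on the frame-2 superpixel its pixels land in, and if a fraction >= threshold agrees, the whole superpixel (stray 10% included) is assigned there. The function name and the list-of-lists representation (instead of numpy arrays) are illustrative:

```python
from collections import Counter

def warp_and_vote(labels1, labels2, u, v, threshold=0.9):
    """Map every frame-1 pixel (y, x) to (y + v, x + u) in frame 2,
    then assign each frame-1 superpixel to the frame-2 superpixel that
    at least `threshold` of its pixels landed in (majority vote)."""
    h, w = len(labels1), len(labels1[0])
    votes = {}
    for y in range(h):
        for x in range(w):
            ny = min(h - 1, max(0, int(round(y + v[y][x]))))  # clamp to image
            nx = min(w - 1, max(0, int(round(x + u[y][x]))))
            votes.setdefault(labels1[y][x], Counter())[labels2[ny][nx]] += 1
    assignment = {}
    for sp, counter in votes.items():
        target, count = counter.most_common(1)[0]
        if count / sum(counter.values()) >= threshold:
            assignment[sp] = target  # the stray pixels come along too
    return assignment
```

Superpixels below the threshold get no entry, which answers (B) empirically: they are the ones whose pixels scattered across several frame-2 superpixels.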
Bazel Workspace depend on *.deb
What would be, in your opinion, the best way for a project to depend on another pre-compiled project that is distributed as Debian packages?
A custom (new_)debian_package() workspace rule?
C compiler option in Bazel CROSSTOOL file
How does one set C-only (not C++) compiler flags in the CROSSTOOL file in Bazel? compiler_flag can be used for both C and C++, and cxx_flag for C++ code only. What is the corresponding way to set C-only options?
In particular, I need to specify -std=c99 as an option. The only way I know of doing this right now is by passing copts = ["-std=c99"] to every target, which is messy and error-prone.
bazel rules_nodejs can't be deployed using rules_k8s: wrong platform
I'm building a nodejs app on OSX and trying to deploy it to Kubernetes using rules_k8s.
It almost works, except that the node doesn't start because of this error:
/app/examples/hellohttp/nodejs/nodejs-hellohttp_image.binary: line 147: /app/examples/hellohttp/nodejs/nodejs-hellohttp_image.binary.runfiles/com_github_yourbase_yourbase/external/nodejs/bin/node: cannot execute binary file: Exec format error
/app/examples/hellohttp/nodejs/nodejs-hellohttp_image.binary: line 147: /app/examples/hellohttp/nodejs/nodejs-hellohttp_image.binary.runfiles/com_github_yourbase_yourbase/external/nodejs/bin/node: Success
It looks like the node binary being deployed wasn't built for the platform where k8s is running (linux on amd64, normal Google GKE nodes).
The command I'm using to make the deployment:
bazel run --cpu=k8 //examples/hellohttp/nodejs:nodejs-hellohttp_deploy.apply
I tried other combinations of --cpu but I couldn't get it to work. The argument that felt closest was this:
$ bazel run --platforms=@bazel_tools//platforms:linux //examples/hellohttp/nodejs:nodejs-hellohttp_deploy.apply
ERROR: While resolving toolchains for target //examples/hellohttp/nodejs:nodejs-hellohttp_deploy.apply: Target constraint_value rule @bazel_tools//platforms:linux was found as the target platform, but does not provide PlatformInfo
ERROR: Analysis of target '//examples/hellohttp/nodejs:nodejs-hellohttp_deploy.apply' failed; build aborted: Target constraint_value rule @bazel_tools//platforms:linux was found as the target platform, but does not provide PlatformInfo
INFO: Elapsed time: 0.255s
FAILED: Build did NOT complete successfully (0 packages loaded)
ERROR: Build failed. Not running target
This says there was a toolchain problem and mentions a missing PlatformInfo, but it doesn't say anything about how to fix it. :-(
For what it's worth, with Go I had similar problems, and they were solved by passing --experimental_platforms=@io_bazel_rules_go//go/toolchain:linux_amd64 to the bazel run command.
How do I make something similar work for nodejs?