exec format error (llvm-config) when installing numba on raspberry pi 3
I installed the aarch64 prebuilts for LLVM 5.0 on my Raspberry Pi 3 and then ran 'pip3 install numba', pointing LLVM_CONFIG at the llvm-config executable. I get an exec format error when the install tries to execute llvm-config.
root@raspberrypi:/tmp# file /root/clang+llvm-5.0.0-aarch64-linux-gnu/bin/llvm-config
/root/clang+llvm-5.0.0-aarch64-linux-gnu/bin/llvm-config: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-aarch64.so.1, for GNU/Linux 3.7.0, not stripped
root@raspberrypi:/tmp# echo $LLVM_CONFIG
/root/clang+llvm-5.0.0-aarch64-linux-gnu/bin/llvm-config
root@raspberrypi:/tmp# /root/clang+llvm-5.0.0-aarch64-linux-gnu/bin/llvm-config
bash: /root/clang+llvm-5.0.0-aarch64-linux-gnu/bin/llvm-config: cannot execute binary file: Exec format error
What am I doing wrong? Are the aarch64 prebuilts the wrong ones to download for the Pi 3?
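One thing worth checking first: the Pi 3 has a 64-bit CPU, but stock Raspbian runs a 32-bit (armv7) userland, which cannot execute aarch64 ELF binaries and fails with exactly this error. A quick sanity check, assuming a Raspbian-like system:

```shell
# Show the architecture the running system reports; 32-bit Raspbian
# prints "armv7l" here, and only a 64-bit OS prints "aarch64".
uname -m
# Word size of the running userland (32 on stock Raspbian).
getconf LONG_BIT
```

If this prints armv7l, the armv7a (arm-linux-gnueabihf) LLVM prebuilts, not the aarch64 ones, are the matching download.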
See also questions close to this topic
How to get new and modified data from two different sources in Python
I have large datasets from 2 sources: one is a huge csv file and the other comes from a txt file. Source 1 (CSV file):
id,name,address
00001,Jack,LA
00002,Jayce,CA
00003,Tony,GA
00004,Katar,CA
00005,Henry,GA
Source 2 (txt file)
00001|Jack |CA
00002|Jayce|CA
00003|Tony |LA
00004|Katar|CA
00005|Henry|HI
00006|Darick|GA
The output of the script I want is something like:
00001|Jack |CA
00003|Tony |LA
00005|Henry|HI
00006|Darick|GA
The CSV file is historical data and the txt file is new data. I want to compare them and get only the new and modified records. In my case the files are really big (up to 1,000,000 lines) and I have no idea how to do this. Could anyone help me?
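Since only exact key lookups are needed, a dictionary keyed on id handles this in one pass over each file; a dict of ~1,000,000 ids fits comfortably in memory. A minimal sketch with the sample rows inlined (real code would pass the open file objects instead of io.StringIO):

```python
import csv
import io

# Inline stand-ins for the real files; replace with open('old.csv') etc.
old_csv = """id,name,address
00001,Jack,LA
00002,Jayce,CA
00003,Tony,GA
00004,Katar,CA
00005,Henry,GA
"""
new_txt = """00001|Jack |CA
00002|Jayce|CA
00003|Tony |LA
00004|Katar|CA
00005|Henry|HI
00006|Darick|GA
"""

# Build {id: (name, address)} from the historical CSV.
old = {}
for row in csv.DictReader(io.StringIO(old_csv)):
    old[row['id']] = (row['name'], row['address'])

# Stream the new file; strip() handles the space-padded names, and
# we keep only rows whose id is unseen or whose fields changed.
changed = []
for line in io.StringIO(new_txt):
    id_, name, addr = (f.strip() for f in line.rstrip('\n').split('|'))
    if old.get(id_) != (name, addr):
        changed.append((id_, name, addr))

for id_, name, addr in changed:
    print(f'{id_}|{name}|{addr}')
```

This reproduces the desired output (00001, 00003, 00005 modified; 00006 new) while reading the larger new file line by line.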
yesterday at hours:minute to datetime
I have a datetime string like "Yesterday at 10:50 PM", and I am trying to convert it to a Python datetime. I've already tried using timedelta(days=1) and changing the hours and minutes, but is there a better way to do it?
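For just "Today"/"Yesterday" plus a clock time, a small parser along these lines is enough (the exact input format here is an assumption); for messier inputs, the third-party dateparser package handles such relative phrases directly:

```python
from datetime import datetime, timedelta

def parse_relative(text, now=None):
    """Parse strings like 'Yesterday at 10:50 PM' (assumed format)."""
    now = now or datetime.now()
    day_word, _, clock = text.partition(' at ')
    t = datetime.strptime(clock, '%I:%M %p').time()
    offset = {'today': 0, 'yesterday': 1}[day_word.strip().lower()]
    day = (now - timedelta(days=offset)).date()
    return datetime.combine(day, t)

print(parse_relative('Yesterday at 10:50 PM', now=datetime(2018, 10, 20, 9, 0)))
# 2018-10-19 22:50:00
```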
I'm trying to display an image in Python 3.7.0 using Jupyter Notebook. Why isn't it working?
The image is called "pythontest.png". I'm trying to display the image using the following code:
from IPython.display import Image
img = 'pythontest.png'
Image(url=img)
print(Image)
but when I run the program it simply displays:
What does that mean and how do I make it display the actual image? Thanks.
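A likely explanation: in a notebook, Image(url=img) only renders when it is the last expression in a cell, and print(Image) just prints the class object itself, which would account for the odd text output. Calling display explicitly sidesteps both issues; a minimal sketch, assuming pythontest.png sits next to the notebook:

```python
from IPython.display import Image, display

# Render the image explicitly instead of relying on the value of
# the last expression in the cell being auto-displayed.
img = Image(url='pythontest.png')
display(img)
```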
Latest CMake and LLVM on Windows 10
The latest LLVM is 7.0, and it works quite well on Windows 10 x64, building native executables etc.
The latest CMake is 3.12.x.
I have VS 2017 Pro installed as well.
I downloaded them both and tried to build a simple project with them on Windows, and it didn't work: even with CC/CXX set and the linker pointed at lld, it fails to compile the test program because it cannot find rc (the resource compiler).
Tried targeting GNU make as well as Ninja as build system.
Is this a supported configuration? If yes, how to make it work?
Basically, I would like to use CMake/LLVM from an editor and terminal, the way I do on Linux.
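For what it's worth, the missing rc is the Windows resource compiler from the Windows SDK; clang targeting MSVC expects the same environment MSVC does. One configuration that reportedly works is running CMake from a "x64 Native Tools" VS developer prompt (which puts rc.exe on PATH) and using clang-cl with lld-link; the generator and paths below are assumptions to adapt:

```shell
REM Run inside a "x64 Native Tools Command Prompt for VS 2017",
REM so the MSVC headers, libraries and rc.exe are all on PATH.
cmake -G Ninja ^
  -DCMAKE_C_COMPILER=clang-cl ^
  -DCMAKE_CXX_COMPILER=clang-cl ^
  -DCMAKE_LINKER=lld-link ^
  path\to\source
```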
ld verification of load/store fails when using LTO but not much info is provided
After updating to Xcode 10 our C++ codebase does not link when built with -Os and -flto. The following error is provided:
ld: Explicit load/store type does not match pointee type of pointer operand (Producer: 'APPLE_1_1000.11.45.2_0' Reader: 'LLVM APPLE_1_1000.11.45.2_0') for architecture x86_64
(the same error occurs on the latest Xcode 10.1 Beta 3)
The same code builds fine with Xcode 9. Sadly, the linker provides no more information than the error message above; some detail about the offending object file would help pinpoint the exact source of the problem. Removing -flto eliminates the error…
Does anyone have any debugging suggestions/ideas? We've tried using "--trace" with ld to get more info on the files being processed, but the error message just gets printed in the middle of the trace, with no apparent correlation between the error and the input file being listed at that moment.
This all smells very much like a compiler bug, and I've reported it to Apple via the bug tracker.
Any extra help would be greatly appreciated. Thanks
I can't call function in LLVM Tutorial: Table of Contents Chapter6 code
I compiled my toy.cpp in llvm/examples/Kaleidoscope/Chapter6/toy.cpp
It compiled successfully, but when I executed my code:
ready> extern printd(x);
ready> Read extern: declare double @printd(double)
ready> printd(40);
ready> Failure value returned from cantFail wrapped call
UNREACHABLE executed at /usr/local/include/llvm/Support/Error.h:716!
I can't call my function.
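For reference, the tutorial itself notes that host-process functions like printd are only visible to the JIT if the interpreter exports its own symbols; on Linux that means linking the toy binary with -rdynamic. A build line along the lines the tutorial suggests (component list and flags may need adjusting for your LLVM install):

```shell
# -rdynamic exports toy's own symbols so the JIT's symbol resolver
# can find printd() in the running process at call time.
clang++ -g -O3 -rdynamic toy.cpp \
  $(llvm-config --cxxflags --ldflags --system-libs --libs core orcjit native) \
  -o toy
```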
how to resolve GPU compute capability error in numba(cuda) library
I am starting to use the Numba (CUDA) package for huge matrix (and vector) multiplications in my Python code. I have run into this error:
numba.cuda.cudadrv.error.NvvmSupportError: GPU compute capability 2.1 is not supported (requires >=2.0)
I have an NVIDIA GeForce GT 720M in my machine. Any idea how I can resolve this problem?
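The message appears to be confusingly worded (a quirk of some numba releases): the real minimum for recent numba/CUDA toolchains is compute capability 3.0, and the GT 720M is a Fermi-class part at 2.1, so no software setting will enable it; only an older CUDA/numba stack or different hardware will. The check boils down to a tuple comparison like this sketch (the 3.0 floor is an assumption about recent releases):

```python
# Plain-Python sketch of the capability gate; on a real machine,
# numba.cuda.get_current_device().compute_capability supplies the tuple.
MIN_SUPPORTED = (3, 0)   # assumed floor for CUDA 9-era toolchains

def is_supported(cc):
    return cc >= MIN_SUPPORTED

print(is_supported((2, 1)))  # False: the GT 720M is rejected
print(is_supported((3, 5)))  # True
```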
Programmatic nested numba.cuda function calls
Numba & CUDA noob here. I'd like to be able to have one numba.cuda function programmatically call another one from the device, without having to pass any data back to the host. For example, given the setup
from numba import cuda

@cuda.jit('int32(int32)', device=True)
def a(x):
    return x + 1

@cuda.jit('int32(int32)', device=True)
def b(x):
    return 2 * x
I'd like to be able to define a composition kernel function like
@cuda.jit('void(int32, __device__, int32)')
def b_comp(x, inner, result):
    y = inner(x)
    result = b(y)
and successfully obtain
b_comp(1, a, result)
assert result == 4
Ideally I'd like b_comp to accept varying function arguments after it compiles [e.g. after the above call, to still accept b_comp(1, b, result)] -- but a solution where the function arguments become fixed at compile time will still work for me.
From what I've read, it seems that CUDA supports passing function pointers. This post suggests that numba.cuda has no such support, but the post isn't convincing and is also a year old. The page for supported Python in numba.cuda doesn't mention function pointer support, but it links to the supported Python in numba page, which makes it clear that numba.jit() does support functions as arguments, although they get fixed at compile time. If numba.cuda.jit() does the same, like I said above, that'll work. In that case, when specifying the signature for b_comp, how should I state the variable type? Or could I use …?
If numba doesn't support any such direct approach, is metaprogramming a reasonable option? E.g. once I know the inner function, my script could create a new script containing a Python function that composes those specific functions, then apply numba.cuda.jit() to it and import the result. That seems convoluted, but it's the only other numba-based option I could think of.
If numba won't do the trick at all, or at least not without serious kludgery, I'd be happy with an answer that gives a few details, plus a recommendation like "switch to PyCUDA".
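On the metaprogramming route: the usual trick is not a generated script but a closure factory that builds (and compiles) a fresh kernel per inner function, since jitted device functions are baked in at compile time anyway. A CPU-only sketch of the pattern; plain Python stands in for cuda.jit here, which is an assumption about how it would be applied on device:

```python
def make_composed(inner, outer):
    # Each call builds a new composed function with `inner` and `outer`
    # fixed; in practice you would return cuda.jit(...)(composed).
    def composed(x):
        return outer(inner(x))
    return composed

# Stand-ins for the device functions a() and b() from the question.
a = lambda x: x + 1
b = lambda x: 2 * x

b_comp_a = make_composed(a, b)
print(b_comp_a(1))  # 4, matching `assert result == 4` above
```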
Performance of cython vs numba
Hey, I am currently working on a Python module for thermodynamic fluid phase equilibria. For this I need to program activity coefficient models, such as NRTL, that involve several summations. In order to enhance the performance of the module I tried to jit the function with numba:
@jit(cache=True)
def NRTL(X, T, g, alpha, g1):
    '''
    NRTL activity coefficient model.
    input
    X: array like, vector of molar fractions
    T: float, absolute temperature in K.
    g: array like, matrix of energy interactions in K.
    g1: array_like, matrix of energy interactions in K^2
    alpha: float, aleatory factor.
    tau = ((g + g1/T)/T)
    output
    lngama: array_like, natural logarithm of activity coefficient
    '''
    tau = g + g1*T
    tau /= T
    nc = len(X)
    G = np.exp(-alpha*tau)
    lngama = np.zeros_like(X)
    for i in range(nc):
        SumC = SumD = SumE = 0
        for j in range(nc):
            A = X[j]*G[i,j]
            SumA = SumB = 0
            for k in range(nc):
                SumA += X[k]*G[k,j]
                SumB += X[k]*G[k,j]*tau[k,j]
            SumC += A/SumA*(tau[i,j] - SumB/SumA)
            SumD += X[j]*G[j,i]*tau[j,i]
            SumE += X[j]*G[j,i]
        lngama[i] = SumD/SumE + SumC
    return lngama
I have been trying other options, such as cython, but I am not getting performance as good as with numba's jit.
import numpy as np
cimport numpy as np
cimport cython

@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True)
cdef double[:] nrtlaux(double[:] X, double[:,::1] G, double[:,::1] tau, int nc):
    cdef int i, j, k
    cdef double A, SumA, SumB, SumC, SumD, SumE, aux1, aux2
    cdef double[:] lngama = np.zeros(nc)
    for i in range(nc):
        SumC = SumD = SumE = 0.
        for j in range(nc):
            A = X[j]*G[i,j]
            SumA = SumB = 0.
            for k in range(nc):
                aux1 = X[k]*G[k,j]
                SumA += aux1
                SumB += aux1*tau[k,j]
            SumC += A/SumA*(tau[i,j] - SumB/SumA)
            aux2 = X[j]*G[j,i]
            SumD += aux2*tau[j,i]
            SumE += aux2
        lngama[i] = SumD/SumE + SumC
    return lngama

def NRTL(np.ndarray[double, ndim=1] X, double T,
         np.ndarray[double, ndim=2] g, np.ndarray[double, ndim=2] alpha,
         np.ndarray[double, ndim=2] g1):
    cdef int nc = len(X)
    cdef:
        double[:,::1] tau = (g/T + g1)
        double[:,::1] G = np.exp(-alpha * tau)
    lngama = nrtlaux(X, G, tau, nc)
    return np.asarray(lngama)
I use the following parameters to evaluate the function:
X = np.array([0.5, 0.4, 0.1])
g = np.array([[0, 35.00002657, 463.719316],
              [341.00001923, 0, 96.02154497],
              [1194.42262, 534.77089478, 0]])
alpha = np.array([[0, 0.3456916919878884, 0.242020522],
                  [0.3456916919878884, 0, 0.54],
                  [0.242020522, 0.54, 0]])
g1 = np.zeros_like(g)
T = 350.
And I got the following results:
%timeit NRTL(X, T, g, alpha, g1)  # cython
13.9 µs ± 489 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit nrtltp(X, T, g, alpha, g1)  # numba
1.82 µs ± 35 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
I am kind of surprised by how good the results from the jitted function are. I am also a beginner with cython, so I was hoping for any suggestions to improve its performance.
librosa: installs properly only with clone git AND cannot use any functions though it imports
I'm quite a newbie with Python, and with programming in general, and I am currently struggling with installing and using the library librosa. I thought I succeeded in installing it with
git clone https://github.com/librosa/librosa.git librosa
and also with installing numpy and scipy seperately, again with
git clone https://github.com/numpy/numpy.git numpy git clone https://github.com/scipy/scipy.git scipy
and it seemed to finally work; I could also import it without any problems. But as I tried to use
librosa.load(pathfile, y, sr)
filename = librosa.util.example_audio_file()
I get the error message:
Traceback (most recent call last):
  File "home/pi/new version.py", line 17, in <module>
    slowbeat_lib = librosa.load('home/pi/gpio-music-box/samples/slowbeat.ogg', y, sr=None)
AttributeError: module 'librosa' has no attribute 'load'
the same with
So, I was thinking that I probably didn't install it completely, or not in the right directory, because it is not in usr/lib, but in home/pi/...
I tried to change that, but failed. Also installing it with
pip install
sudo pip install
never worked, because it always failed to build wheels for several dependencies such as numpy, scipy, llvmlite, ... --> that's also quite weird, right?
Or could the problem be something totally different?
So actually I am quite helpless, and thankful for any hint or advice! :)
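For the install itself: git clone only downloads the sources into the current directory; it never installs anything onto sys.path, which is likely why the package half-imports but exposes no functions. Installing through pip is the usual route; on recent Raspbian, pip can pull prebuilt ARM wheels from piwheels, which avoids the failing from-source builds of numpy/scipy/llvmlite. A sketch:

```shell
# Preferred: install from the package index rather than the raw clone.
sudo pip3 install librosa
# Alternatively, install the existing checkout properly:
cd librosa && sudo pip3 install .
```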
How to extract audio after particular sound?
Let's say I have a few very long audio files (for example, radio recordings). I need to extract the 5 seconds after a particular sound (for example, an ad start sound) from each file. Each file may contain 3-5 such sounds, so I should get (3-5) × (number of source files) result files.
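One workable sketch, assuming the trigger sound is available as a template clip: slide it over the recording with cross-correlation, threshold the peaks, and slice out the five seconds that follow each hit. Synthetic numpy data stands in for real audio here; in practice, load the samples with scipy.io.wavfile.read or librosa.load and use the real sample rate:

```python
import numpy as np

sr = 100                                   # toy sample rate (samples/second)
rng = np.random.default_rng(0)
jingle = 3 * rng.standard_normal(50)       # the known "ad start" template
signal = rng.standard_normal(sr * 60)      # one minute of background noise
for s in (500, 2500):                      # plant the jingle twice
    signal[s:s + len(jingle)] = jingle

# Correlate the template against every alignment of the recording;
# index p in `corr` corresponds to the template starting at sample p.
corr = np.correlate(signal, jingle, mode='valid')
hits = np.where(corr > 0.8 * corr.max())[0]   # crude peak threshold

# Cut the 5 seconds that follow each detected jingle.
clips = [signal[h + len(jingle):h + len(jingle) + 5 * sr] for h in hits]

print(sorted(hits.tolist()))  # [500, 2500]
```

For real audio, a normalized correlation (dividing by the local signal energy) is more robust to loudness changes, but the slicing logic stays the same.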