Pandas 1.4.2 upgrade causing Flask: array(0.78015261) (0d array) is not JSON serializable
We upgraded to a newer version of python with new pandas, numpy, etc. The versions are now:
- python=3.10.4
- pandas=1.4.2
- numpy=1.22.3
In previous versions, the error never occurred. When I do some debugging, it's not that .to_json() is wrong or fails; it's that pd.DataFrame([my_result]) doesn't return correctly.
The code is this:
# Used in debugging and doesn't return properly
dataframe_test = pd.DataFrame([my_result])

# Error gets thrown here
return pd.DataFrame([my_result]).to_json()
The dataframe_test looks like this when I view it in the Data Viewer within VS Code (it also keeps processing, as the data seems to still be running in VS Code; the bar above the viewer indicates that it's still trying to run/process):

The my_result variable looks like this upon entering the dataframe:
I am not sure what exactly is causing the issue and not sure how to debug in Pandas to see what is happening. Any ideas?
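For reference, a minimal sketch of one thing worth testing (my_result below is an invented stand-in; the real payload is assumed to be a dict that may contain 0-d numpy arrays): unwrap 0-d arrays into plain Python scalars before building the DataFrame, since the error message points at a 0-d array.

import numpy as np
import pandas as pd

# Hypothetical stand-in for the real payload: a dict holding a 0-d array
my_result = {"score": np.array(0.78015261)}

def unwrap_0d(value):
    # 0-d ndarrays become plain Python scalars via .item()
    if isinstance(value, np.ndarray) and value.ndim == 0:
        return value.item()
    return value

clean_result = {k: unwrap_0d(v) for k, v in my_result.items()}
print(pd.DataFrame([clean_result]).to_json())  # {"score":{"0":0.78015261}}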
See also questions close to this topic
-
Python File Tagging System does not retrieve nested dictionaries in dictionary
I am building a file tagging system using Python. The idea is simple. Given a directory of files (and files within subdirectories), I want to filter them out using a filter input and tag those files with a word or a phrase.
If I got the following contents in my current directory:
data/
    budget.xls
    world_building_budget.txt
a.txt
b.exe
hello_world.dat
world_builder.spec
and I execute the following command in the shell:
py -3 tag_tool.py -filter=world -tag="World-Building Tool"
My output will be:
These files were tagged with "World-Building Tool":

data/world_building_budget.txt
hello_world.dat
world_builder.spec
My current output isn't exactly like this but basically, I am converting all files and files within subdirectories into a single dictionary like this:
def fs_tree_to_dict(path_):
    file_token = ''
    for root, dirs, files in os.walk(path_):
        tree = {d: fs_tree_to_dict(os.path.join(root, d)) for d in dirs}
        tree.update({f: file_token for f in files})
        return tree
Right now, my dictionary looks like this:

key: ''

In the following function, I am turning the empty values '' into empty lists (to hold my tags):

def empty_str_to_list(d):
    for k, v in d.items():
        if v == '':
            d[k] = []
        elif isinstance(v, dict):
            empty_str_to_list(v)
When I run my entire code, this is my output:
hello_world.dat ['World-Building Tool']
world_builder.spec ['World-Building Tool']
But it does not see data/world_building_budget.txt. This is the full dictionary:

{'data': {'world_building_budget.txt': []},
 'a.txt': [],
 'hello_world.dat': [],
 'b.exe': [],
 'world_builder.spec': []}
This is my full code:
import os, argparse

def fs_tree_to_dict(path_):
    file_token = ''
    for root, dirs, files in os.walk(path_):
        tree = {d: fs_tree_to_dict(os.path.join(root, d)) for d in dirs}
        tree.update({f: file_token for f in files})
        return tree

def empty_str_to_list(d):
    for k, v in d.items():
        if v == '':
            d[k] = []
        elif isinstance(v, dict):
            empty_str_to_list(v)

parser = argparse.ArgumentParser(description="Just an example",
                                 formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument("--filter", action="store", help="keyword to filter files")
parser.add_argument("--tag", action="store", help="a tag phrase to attach to a file")
parser.add_argument("--get_tagged", action="store", help="retrieve files matching an existing tag")
args = parser.parse_args()

filter = args.filter
tag = args.tag
get_tagged = args.get_tagged

current_dir = os.getcwd()
files_dict = fs_tree_to_dict(current_dir)
empty_str_to_list(files_dict)

for k, v in files_dict.items():
    if filter in k:
        if v == []:
            v.append(tag)
            print(k, v)
    elif isinstance(v, dict):
        empty_str_to_list(v)
        if get_tagged in v:
            print(k, v)
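A hedged sketch of one way the nested case could be handled (the helper name tag_files and the printing of full paths are my assumptions, not the original code): recurse into sub-dictionaries so files like data/world_building_budget.txt are also visited.

import os

def tag_files(tree, keyword, tag_phrase, prefix=""):
    # Walk the nested dict; recurse into sub-dicts so nested files are seen too
    for name, value in tree.items():
        path = os.path.join(prefix, name)
        if isinstance(value, dict):
            tag_files(value, keyword, tag_phrase, path)
        elif keyword in name:
            value.append(tag_phrase)
            print(path, value)

# e.g. tag_files(files_dict, filter, tag)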
-
Actually, I am working on a project and it is showing "no module named pip_internal". Please help me with this. I am using PyCharm (conda interpreter).

File "C:\Users\pjain\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
File "C:\Users\pjain\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
File "C:\Users\pjain\AppData\Local\Programs\Python\Python310\Scripts\pip.exe\__main__.py", line 4, in <module>
File "C:\Users\pjain\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_internal\__init__.py", line 4, in <module>
    from pip_internal.utils import _log
-
Looping the function if the input is not string
I'm new to Python (first of all). I have a homework assignment to write a function that checks whether an item exists in a dictionary or not.
inventory = {"apple" : 50, "orange" : 50, "pineapple" : 70, "strawberry" : 30} def check_item(): x = input("Enter the fruit's name: ") if not x.isalpha(): print("Error! You need to type the name of the fruit") elif x in inventory: print("Fruit found:", x) print("Inventory available:", inventory[x],"KG") else: print("Fruit not found") check_item()
I want the function to loop again only if the input written is not a string. I've tried putting return under print("Error! You need to type the name of the fruit"), but that didn't work. Help!
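One minimal sketch of that behaviour (assuming "not a string" means "not alphabetic", as the isalpha() check suggests) is to re-prompt in a while loop until the input passes the check:

inventory = {"apple": 50, "orange": 50, "pineapple": 70, "strawberry": 30}

def check_item():
    while True:
        x = input("Enter the fruit's name: ")
        if x.isalpha():
            break  # valid text: stop looping and look it up
        print("Error! You need to type the name of the fruit")
    if x in inventory:
        print("Fruit found:", x)
        print("Inventory available:", inventory[x], "KG")
    else:
        print("Fruit not found")

check_item()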
-
Any efficient way to compare two dataframes and append new entries in pandas?
I have new files which I want to add to a historical table, but before that I need to check the new file against the historical table by comparing two columns in particular: one is state and the other is the date column. First, I need to find max(state, date), then check the entries with max(state, date) against the historical table; if they are not in the historical table, append them, otherwise do nothing. I tried to do this in pandas with a group-by on the new file and the historical table and then a comparison: if there are any new entries in the new file that are not in the historical data, add them. Now I am having issues appending the new values to the historical table correctly in pandas. Does anyone have quick thoughts? My current attempt:
import pandas as pd

src_df = pd.read_csv("https://raw.githubusercontent.com/adamFlyn/test_rl/main/src_df.csv")
hist_df = pd.read_csv("https://raw.githubusercontent.com/adamFlyn/test_rl/main/historical_df.csv")

picked_rows = src_df.loc[src_df.groupby('state')['yyyy_mm'].idxmax()]
I want to check picked_rows against hist_df, matching on the state and yyyy_mm columns, so that I only add entries from picked_rows where state has the max value or the most recent dates. I created the desired output below. I tried an inner join and pandas.concat, but neither gives me the correct output. Does anyone have any ideas on this? Here is the desired output that I want to get:
import pandas as pd

desired_output = pd.read_csv("https://raw.githubusercontent.com/adamFlyn/test_rl/main/output_df.csv")
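For reference, one common pattern for "append only rows not already in the historical table" is a left anti-join via merge(indicator=True). A hedged sketch (that matching on the (state, yyyy_mm) pair is the intended rule is my assumption):

import pandas as pd

src_df = pd.read_csv("https://raw.githubusercontent.com/adamFlyn/test_rl/main/src_df.csv")
hist_df = pd.read_csv("https://raw.githubusercontent.com/adamFlyn/test_rl/main/historical_df.csv")

# Latest row per state in the new file
picked_rows = src_df.loc[src_df.groupby('state')['yyyy_mm'].idxmax()]

# Keep only picked_rows whose (state, yyyy_mm) pair is absent from hist_df
merged = picked_rows.merge(hist_df[['state', 'yyyy_mm']],
                           on=['state', 'yyyy_mm'],
                           how='left', indicator=True)
new_rows = merged.loc[merged['_merge'] == 'left_only'].drop(columns='_merge')

updated_hist = pd.concat([hist_df, new_rows], ignore_index=True)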
-
How to bring data frame into single column from multiple columns in python
I have data in the multiple-column format below, and I want to bring all 4 pcp columns of data into a single column.
YEAR  Month  pcp1  pcp2  pcp3  pcp4
1984  1      0     0     0     0
1984  2      1.2   0     0     0
1984  3      0     0     0     0
1984  4      0     0     0     0
1984  5      0     0     0     0
1984  6      0     0     0     1.6
1984  7      3     3     9.2   3.2
1984  8      6.2   27.1  5.4   0
1984  9      0     0     0     0
1984  10     0     0     0     0
1984  11     0     0     0     0
1984  12     0     0     0     0
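A hedged sketch of one standard way to do this with pd.melt (column names are taken from the table above; the var/value output column names are my choice):

import pandas as pd

# Two sample rows in the shape shown above
df = pd.DataFrame({
    'YEAR':  [1984, 1984],
    'Month': [7, 8],
    'pcp1':  [3.0, 6.2],
    'pcp2':  [3.0, 27.1],
    'pcp3':  [9.2, 5.4],
    'pcp4':  [3.2, 0.0],
})

# Wide -> long: one 'value' column, with 'pcp' recording the source column
long_df = df.melt(id_vars=['YEAR', 'Month'],
                  value_vars=['pcp1', 'pcp2', 'pcp3', 'pcp4'],
                  var_name='pcp', value_name='value')
print(long_df)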
-
Exclude Japanese Stopwords from File
I am trying to remove Japanese stopwords from a text corpus from twitter. Unfortunately the frequently used nltk does not contain Japanese, so I had to figure out a different way.
This is my MWE:
import urllib
from urllib.request import urlopen
import MeCab
import re

# slothlib
slothlib_path = "http://svn.sourceforge.jp/svnroot/slothlib/CSharp/Version1/SlothLib/NLP/Filter/StopWord/word/Japanese.txt"
sloth_file = urllib.request.urlopen(slothlib_path)

# stopwordsiso
iso_path = "https://raw.githubusercontent.com/stopwords-iso/stopwords-ja/master/stopwords-ja.txt"
iso_file = urllib.request.urlopen(iso_path)
stopwords = [line.decode("utf-8").strip() for line in iso_file]
stopwords = [ss for ss in stopwords if not ss == u'']
stopwords = list(set(stopwords))

text = '日本語の自然言語処理は本当にしんどい、と彼は十回言った。'

tagger = MeCab.Tagger("-Owakati")
tok_text = tagger.parse(text)

ws = re.compile(" ")
words = [word for word in ws.split(tok_text)]
if words[-1] == u"\n":
    words = words[:-1]

ws = [w for w in words if w not in stopwords]
print(words)
print(ws)
Successfully Completed: It does give out the original tokenized text as well as the one without stopwords
['日本語', 'の', '自然', '言語', '処理', 'は', '本当に', 'しんどい', '、', 'と', '彼', 'は', '十', '回', '言っ', 'た', '。']
['日本語', '自然', '言語', '処理', '本当に', 'しんどい', '、', '十', '回', '言っ', '。']
There are still 2 issues I am facing though:

a) Is it possible to have 2 stopword lists regarded, namely iso_file and sloth_file, so that a word is removed if it is a stopword in either iso_file or sloth_file? (I tried to use line 14 as stopwords = [line.decode("utf-8").strip() for line in zip('iso_file','sloth_file')] but received an error, as tuple attributes may not be decoded.)

b) The ultimate goal would be to generate a new text file in which all stopwords are removed.
I had created this MWE
### first clean twitter csv
import pandas as pd
import re
import emoji

df = pd.read_csv("input.csv")

def cleaner(tweet):
    tweet = re.sub(r"@[^\s]+", "", tweet)  # Remove @username
    tweet = re.sub(r"(?:\@|http?\://|https?\://|www)\S+|\\n", "", tweet)  # Remove http links & \n
    tweet = " ".join(tweet.split())
    tweet = ''.join(c for c in tweet if c not in emoji.UNICODE_EMOJI)  # Remove Emojis
    tweet = tweet.replace("#", "").replace("_", " ")  # Remove hashtag sign but keep the text
    return tweet

df['text'] = df['text'].map(lambda x: cleaner(x))
df['text'].to_csv(r'cleaned.txt', header=None, index=None, sep='\t', mode='a')

### remove stopwords
import urllib
from urllib.request import urlopen
import MeCab
import re

# slothlib
slothlib_path = "http://svn.sourceforge.jp/svnroot/slothlib/CSharp/Version1/SlothLib/NLP/Filter/StopWord/word/Japanese.txt"
sloth_file = urllib.request.urlopen(slothlib_path)

# stopwordsiso
iso_path = "https://raw.githubusercontent.com/stopwords-iso/stopwords-ja/master/stopwords-ja.txt"
iso_file = urllib.request.urlopen(iso_path)
stopwords = [line.decode("utf-8").strip() for line in iso_file]
stopwords = [ss for ss in stopwords if not ss == u'']
stopwords = list(set(stopwords))

with open("cleaned.txt", encoding='utf8') as f:
    cleanedlist = f.readlines()
cleanedlist = list(set(cleanedlist))

tagger = MeCab.Tagger("-Owakati")
tok_text = tagger.parse(cleanedlist)

ws = re.compile(" ")
words = [word for word in ws.split(tok_text)]
if words[-1] == u"\n":
    words = words[:-1]

ws = [w for w in words if w not in stopwords]
print(words)
print(ws)
While it works for the simple input text in the first MWE, for the MWE I just stated I get the error
in method 'Tagger_parse', argument 2 of type 'char const *'
Additional information:
Wrong number or type of arguments for overloaded function 'Tagger_parse'.
Possible C/C++ prototypes are:
    MeCab::Tagger::parse(MeCab::Lattice *) const
    MeCab::Tagger::parse(char const *)
for this line:
tok_text = tagger.parse(cleanedlist)
So I assume I will need to make amendments to cleanedlist?

I have uploaded cleaned.txt on GitHub for reproducing the issue: [txt on github][1]

Also: how would I be able to get the tokenized list that excludes stopwords back into a text format like cleaned.txt? Would it be possible, for this purpose, to create a df of ws? Or might there even be a simpler way?
Sorry for the long request, I tried a lot and tried to make it as easy as possible to understand what I'm driving at :-)
Thank you very much!

[1]: https://gist.github.com/yin-ori/1756f6236944e458fdbc4a4aa8f85a2c
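A hedged sketch addressing both (a) and (b) (the output filename stopwords_removed.txt is my invention, and that the slothlib URL still serves the list is an assumption): merge both lists into one set, and pass MeCab one string at a time, since Tagger.parse accepts a str, not a list.

import urllib.request
import MeCab

slothlib_path = "http://svn.sourceforge.jp/svnroot/slothlib/CSharp/Version1/SlothLib/NLP/Filter/StopWord/word/Japanese.txt"
iso_path = "https://raw.githubusercontent.com/stopwords-iso/stopwords-ja/master/stopwords-ja.txt"

# (a) merge both stopword lists into one set
stopwords = set()
for url in (slothlib_path, iso_path):
    with urllib.request.urlopen(url) as f:
        for line in f:
            w = line.decode("utf-8").strip()
            if w:
                stopwords.add(w)

tagger = MeCab.Tagger("-Owakati")

# (b) Tagger.parse expects a single str, so tokenize line by line
with open("cleaned.txt", encoding="utf8") as f:
    cleanedlist = f.readlines()

filtered_lines = []
for line in cleanedlist:
    tokens = tagger.parse(line).split()
    filtered_lines.append(" ".join(t for t in tokens if t not in stopwords))

with open("stopwords_removed.txt", "w", encoding="utf8") as out:
    out.write("\n".join(filtered_lines))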
-
TypeError: 'float' object cannot be interpreted as an integer on linspace
TypeError                                 Traceback (most recent call last)
d:\website\SpeechProcessForMachineLearning-master\SpeechProcessForMachineLearning-master\speech_process.ipynb Cell 15' in <cell line: 1>()
----> 1 plot_freq(signal, sample_rate)

d:\website\SpeechProcessForMachineLearning-master\SpeechProcessForMachineLearning-master\speech_process.ipynb Cell 10' in plot_freq(signal, sample_rate, fft_size)
      2 def plot_freq(signal, sample_rate, fft_size=512):
      3     xf = np.fft.rfft(signal, fft_size) / fft_size
----> 4     freq = np.linspace(0, sample_rate/2, fft_size/2 + 1)
      5     xfp = 20 * np.log10(np.clip(np.abs(xf), 1e-20, 1e100))
      6     plt.figure(figsize=(20, 5))

File <__array_function__ internals>:5, in linspace(*args, **kwargs)

File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\numpy\core\function_base.py:120, in linspace(start, stop, num, endpoint, retstep, dtype, axis)
     23 @array_function_dispatch(_linspace_dispatcher)
     24 def linspace(start, stop, num=50, endpoint=True, retstep=False, dtype=None,
     25              axis=0):
     26     """
     27     Return evenly spaced numbers over a specified interval.
     28     (...)
    118
    119     """
--> 120     num = operator.index(num)
    121     if num < 0:
    122         raise ValueError("Number of samples, %s, must be non-negative." % num)

TypeError: 'float' object cannot be interpreted as an integer
What is the solution to this problem?
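The traceback points at np.linspace's num argument, which must be an integer. A minimal sketch of the usual fix (the sample_rate value below is a placeholder) is to use floor division:

import numpy as np

fft_size = 512
sample_rate = 16000  # placeholder value

# fft_size/2 is a float in Python 3; linspace's `num` must be an int
freq = np.linspace(0, sample_rate / 2, fft_size // 2 + 1)
print(freq.shape)  # (257,)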
-
How to convert an array of strings to an array of floats
I have an array as shown below, which contains a variable called f. I need to assign some value to this variable f (say 2) and convert the array into a floating-point array.

(['0.', '0.', '-6.190649155150273*^15 + 0.7747634892904517*f^2', '2.2098855503598858*^10 + 4.250697125128597*^-7*f^2', '0.', '0.', '0.', 0.0, 0.0, -1427.5184297531378], dtype=object)

I tried df.convert_objects(convert_numeric=True), but it is not working.
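A hedged sketch of one way this could be done, assuming the strings use Mathematica-style *^ scientific notation and ^ for powers (substituting f via a restricted eval is my approach, not an established pandas one):

import numpy as np

raw = ['0.', '-6.190649155150273*^15 + 0.7747634892904517*f^2',
       '2.2098855503598858*^10 + 4.250697125128597*^-7*f^2', 0.0, -1427.5184297531378]
f = 2.0

def to_float(expr):
    # Non-strings are already numeric
    if not isinstance(expr, str):
        return float(expr)
    # Mathematica-style: *^ is scientific notation, ^ is a power
    expr = expr.replace('*^', 'e').replace('^', '**')
    return float(eval(expr, {"__builtins__": {}}, {'f': f}))

result = np.array([to_float(x) for x in raw], dtype=float)
print(result)
-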
Images with Firebase and Python Flask API
I am currently developing an API using Firebase from Google and Python's Flask libraries. It is a project where I need to save images to the DB and then address them in the API. I would also like to know how to relate an image to an item in the database, say posting an image of Juan that is linked with ALL the information about Juan inside the DB. Thanks!
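A hedged sketch of one common pattern with the firebase_admin SDK (the bucket name, file names, and the people collection are invented placeholders): upload the image to Cloud Storage and store its storage path on the matching Firestore document, so the image stays linked to Juan's record.

import firebase_admin
from firebase_admin import credentials, firestore, storage

cred = credentials.Certificate("serviceAccount.json")  # placeholder path
firebase_admin.initialize_app(cred, {"storageBucket": "my-app.appspot.com"})

# Upload the image file to Cloud Storage
bucket = storage.bucket()
blob = bucket.blob("images/juan.jpg")
blob.upload_from_filename("juan.jpg")

# Record the storage path on Juan's Firestore document
db = firestore.client()
db.collection("people").document("juan").set(
    {"name": "Juan", "image_path": "images/juan.jpg"}, merge=True
)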
-
Flask select which form to POST by button click
I'm trying to have only one of two forms POST, depending on which button from a btn-group is selected. Currently all of the forms POST with no issue, but there are two forms where only one or the other value is needed. I've unsuccessfully tried to parse the unneeded value out in app.py, so I decided to try to make it so only one or the other value gets posted.
Here is the code from the .html where I'm having trouble, it's a fieldset from a larger form the rest of which is working for now.
<fieldset class="row mb-3, container" id="program_value_form_id"> <legend for="value_range" class="col-sm-2 col-form-label">Value Range:</legend> <p> <div class="btn-group, com-sm-1" role="group" aria-label="Basic radio toggle button group"> <input type="radio" onchange="swapConfig(this)" class="btn-check" name="btnradio_valuer" id="btnradio_value1" autocomplete="off" value="valuerange1" checked> <label class="btn btn-outline-primary" for="btnradio_value1">100-200</label> <input type="radio" onchange="swapConfig(this)" class="btn-check" name="btnradio_valuer" id="btnradio_valuer2" autocomplete="off" value="valuerange2"> <label class="btn btn-outline-primary" for="btnradio_valuer2">400-500mhz</label> </div> </p> <div id="btnradio_valuer1Swap"> <label for="value" class="col-sm-2 col-form-label">Value:</label> <p> <div class="col-sm-4"> <input id="value1" type="number" class="form-control" placeholder="xxx.xxx 100-200" name="value1" step="0.001" min="100" max="200"> <span class="validity"></span> </div> </p> </div> <div id="btnradio_valuer2Swap" style="display:none"> <label for="value" class="col-sm-2 col-form-label">Value:</label> <p> <div class="col-sm-4"> <input id="value2" type="number" class="form-control" placeholder="xxx.xxx 400-500" name="value2" step="0.001" min="400" max="500"> <span class="validity"></span> </div> </p> </div> </fieldset>
The forms swap depending on button click. Here is the js for that I got from on here to swap them.
<script>
function swapConfig(x) {
    var radioName = document.getElementsByName(x.name);
    for (i = 0; i < radioName.length; i++) {
        document.getElementById(radioName[i].id.concat("Swap")).style.display = "none";
    }
    document.getElementById(x.id.concat("Swap")).style.display = "initial";
}
</script>
I have tried if statements and ifs inside of fors; none have worked. In frustration I deleted them, but I could try to rewrite them if needed, though I wouldn't expect much from them since my HTML experience is limited. Please let me know if there need to be any corrections to what I've written, or if there is a better way or place to do what I am trying to do.
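On the Flask side, a hedged sketch of one alternative (the route name and response are placeholders): read the posted radio value and keep only the matching field, so the unused one can simply be ignored instead of being parsed out.

from flask import Flask, request

app = Flask(__name__)

@app.route("/submit", methods=["POST"])
def submit():
    # The radio button's value says which number field is the active one
    if request.form.get("btnradio_valuer") == "valuerange1":
        value = request.form.get("value1")
    else:
        value = request.form.get("value2")
    return f"Selected value: {value}"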
-
Trying to run Python from Terminal after Homebrew 3.10 install
I used the Homebrew command brew install python@3.10 to install Python 3.10 on my Mac. However, when I'm in Terminal and type python and then press Tab, it only gives me the option for the Python 3 that's located in my /usr/bin/. How do I enter the 3.10 interpreter that's located in /opt/homebrew/Cellar/?
-
Unable to load libmodelpackage. Cannot make save spec
I am getting this error when trying to convert a Keras model to a Core ML model:
import coremltools as ct

coreml_model = ct.converters.convert(model)
Here are the versions of the libs I use:

Keras 2.8.0
Tensorflow 2.8.0
coremltools 5.2.0
python 3.10.4
ubuntu 22.04
Error:
2022-05-05 15:09:04.213669: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
[the NUMA-node message above is repeated many times throughout the log]
2022-05-05 15:09:04.213796: I tensorflow/core/grappler/devices.cc:66] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 1
2022-05-05 15:09:04.213850: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
2022-05-05 15:09:04.214500: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 7269 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3080, pci bus id: 0000:01:00.0, compute capability: 8.6
2022-05-05 15:09:04.215812: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:1164] Optimization results for grappler item: graph_to_optimize
  function_optimizer: function_optimizer did nothing. time = 0.003ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.
2022-05-05 15:09:04.328088: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:1164] Optimization results for grappler item: graph_to_optimize
  constant_folding: Graph size after: 42 nodes (-14), 55 edges (-14), time = 4.112ms.
  dependency_optimizer: Graph size after: 41 nodes (-1), 40 edges (-15), time = 0.282ms.
  debug_stripper: debug_stripper did nothing. time = 0.004ms.
  constant_folding: Graph size after: 41 nodes (0), 40 edges (0), time = 1.075ms.
  dependency_optimizer: Graph size after: 41 nodes (0), 40 edges (0), time = 0.227ms.
  debug_stripper: debug_stripper did nothing. time = 0.003ms.

Running TensorFlow Graph Passes: 100%|██████████| 6/6 [00:00<00:00, 51.88 passes/s]
Converting Frontend ==> MIL Ops: 100%|██████████| 41/41 [00:00<00:00, 2051.40 ops/s]
Running MIL Common passes: 100%|██████████| 34/34 [00:00<00:00, 1104.92 passes/s]
Running MIL Clean up passes: 100%|██████████| 9/9 [00:00<00:00, 682.35 passes/s]
Translating MIL ==> NeuralNetwork Ops: 100%|██████████| 67/67 [00:00<00:00, 1622.07 ops/s]

---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
Input In [34], in <cell line: 2>()
      1 import coremltools as ct
----> 2 coreml_model = ct.converters.convert(model)

File ~/.pyenv/versions/3.10.4/envs/venv/lib/python3.10/site-packages/coremltools/converters/_converters_entry.py:352, in convert(model, source, inputs, outputs, classifier_config, minimum_deployment_target, convert_to, compute_precision, skip_model_load, compute_units, useCPUOnly, package_dir, debug)
    349 if ext != _MLPACKAGE_EXTENSION:
    350     raise Exception("If package_dir is provided, it must have extension {} (not {})".format(_MLPACKAGE_EXTENSION, ext))
--> 352 mlmodel = mil_convert(
    353     model,
    354     convert_from=exact_source,
    355     convert_to=exact_target,
    356     inputs=inputs,
    357     outputs=outputs,
    358     classifier_config=classifier_config,
    359     transforms=tuple(transforms),
    360     skip_model_load=skip_model_load,
    361     compute_units=compute_units,
    362     package_dir=package_dir,
    363     debug=debug,
    364 )

File ~/.pyenv/versions/3.10.4/envs/venv/lib/python3.10/site-packages/coremltools/converters/mil/converter.py:183, in mil_convert(model, convert_from, convert_to, compute_units, **kwargs)
--> 183 return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)

File ~/.pyenv/versions/3.10.4/envs/venv/lib/python3.10/site-packages/coremltools/converters/mil/converter.py:231, in _mil_convert(model, convert_from, convert_to, registry, modelClass, compute_units, **kwargs)
--> 231 return modelClass(proto,
    232     mil_program=mil_program,
    233     skip_model_load=kwargs.get('skip_model_load', False),
    234     compute_units=compute_units)

File ~/.pyenv/versions/3.10.4/envs/venv/lib/python3.10/site-packages/coremltools/models/model.py:346, in MLModel.__init__(self, model, useCPUOnly, is_temp_package, mil_program, skip_model_load, compute_units, weights_dir)
    343 filename = _tempfile.mktemp(suffix=_MLMODEL_EXTENSION)
    344 _save_spec(model, filename)
--> 346 self.__proxy__, self._spec, self._framework_error = _get_proxy_and_spec(
    347     filename, compute_units, skip_model_load=skip_model_load,
    348 )

File ~/.pyenv/versions/3.10.4/envs/venv/lib/python3.10/site-packages/coremltools/models/model.py:123, in _get_proxy_and_spec(filename, compute_units, skip_model_load)
    122 filename = _os.path.expanduser(filename)
--> 123 specification = _load_spec(filename)

File ~/.pyenv/versions/3.10.4/envs/venv/lib/python3.10/site-packages/coremltools/models/utils.py:210, in load_spec(filename)
    209 if _ModelPackage is None:
--> 210     raise Exception(
    211         "Unable to load libmodelpackage. Cannot make save spec."
    212     )

Exception: Unable to load libmodelpackage. Cannot make save spec.