Gtk-WARNING **: cannot open display: localhost:10.0
I am new to remote connections, and I ran into a rather strange issue when connecting remotely to a machine.
Host: Jetson Nano (Ubuntu)
Client: Asus desktop (Linux Mint)
I am using SSH to connect to the host machine. Once I'm in, I run my program, which should open the camera that the host machine has connected via a MIPI connection... but it does not show a display window. Instead, it displays:
Gtk-WARNING **: cannot open display: localhost:10.0
CONSUMER: Done Success
(Argus)Error InvalidState: Argus client is exiting with 2 outstanding client threads
If I run the program on the machine directly, without the SSH connection, it works and the display shows what the camera is capturing. I tried setting the X11Forwarding and agent forwarding options to yes, and I tried export DISPLAY=localhost:10.0. That did not work either.
Any help would be appreciated. Thanks, GM
1 answer
-
answered 2022-05-06 20:41
Michael Gruner
Bear in mind that a lot of GPU-related functionality won't work without a working display. Sadly, X11 forwarding doesn't work in those cases. At this point, it is not clear if this is your case, or if you simply have the wrong DISPLAY number. You may try:
- Connecting a physical monitor and keyboard to the board, opening a terminal, and running echo $DISPLAY (in the keyboard/monitor session, not the SSH session). Then set that in your remote session as export DISPLAY=:X (where X is what was printed before; see the sketch after this list).
- If you are using GStreamer, using nvoverlaysink, which doesn't require X. You will need a monitor connected to the board, though.
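As a concrete sketch of both suggestions (the gst-launch pipeline assumes the standard L4T GStreamer plugins and a CSI camera; adjust the resolution as needed):

# In the local monitor/keyboard session on the Jetson:
echo $DISPLAY          # prints something like :0

# In the SSH session, point the program at that local display:
export DISPLAY=:0      # substitute whatever the command above printed

# GStreamer alternative: render the CSI camera straight to the
# connected monitor, bypassing X entirely.
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280,height=720' ! nvoverlaysink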
See also questions close to this topic
-
Why doesn't systemd start my autossh tunnel on reboot?
I have a PC that I need to ssh into which only has a private IP (running Ubuntu 20.04 LTS). This is my first time working with autossh and systemd. I have autossh working, and I can easily create a tunnel and ssh into the PC from my server (which has a public IP).
I have noticed that the ssh tunnel will randomly close despite having ServerAliveInterval 30 and ServerAliveCountMax 3 set. I have been fixing this by manually deleting the tunnel on both the PC and the server and then creating it all over again, but this is a temporary solution, since ideally I would want the tunnel to come back by itself. I believe the tunnel closes due to the network dropping and coming back up, but I am not sure why. Here is the systemd service I created on the PC:
tunnel-up.service (192.168.1.111 is the fake public IP of the server)
[Unit]
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=autossh -M 0 -o "ExitOnForwardFailure=yes" -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -NT -o Tunnel=point-to-point -w 1:1 192.168.1.111 &
ExecStart=/bin/bash /root/scripts/link-up.sh

[Install]
WantedBy=multi-user.target
link-up.sh
#!/bin/bash
ip link set tun1 up && ip addr add 10.250.0.3/30 peer 10.250.0.4 dev tun1
I have done systemctl daemon-reload and systemctl start tunnel-up.service, but when I reboot my computer the tunnel never gets created... I had the autossh command inside my link-up.sh script, and when I executed the script manually it worked perfectly; however, when it comes to running it on startup it never works. Any help would be appreciated.
Here is the output of journalctl -u tunnel-up.service
May 06 17:42:18 hmtest.ut systemd[1]: Starting tunnel-up.service...
May 06 17:42:18 hmtest.ut autossh[1067]: port set to 0, monitoring disabled
May 06 17:42:18 hmtest.ut autossh[1067]: starting ssh (count 1)
May 06 17:42:18 hmtest.ut autossh[1067]: ssh child pid is 1077
May 06 17:42:18 hmtest.ut autossh[1077]: ssh: connect to host 192.168.1.111 port 22: Network is unreachable
May 06 17:42:18 hmtest.ut autossh[1067]: ssh exited prematurely with status 255; autossh exiting
May 06 17:42:18 hmtest.ut systemd[1]: tunnel-up.service: Main process exited, code=exited, status=1/FAILURE
May 06 17:42:18 hmtest.ut systemd[1]: tunnel-up.service: Failed with result 'exit-code'.
May 06 17:42:18 hmtest.ut systemd[1]: Failed to start tunnel-up.service.
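Three details stand out in the unit above: systemd does not run ExecStart through a shell, so the trailing & is passed to autossh as a literal argument; Type=oneshot considers the unit finished once ExecStart exits; and there is no Restart= policy, so the boot-time "Network is unreachable" failure is never retried. A minimal sketch of a restructured unit, assuming link-up.sh is adapted to wait until tun1 exists (the /usr/bin/autossh path is an assumption; ExecStart wants an absolute path):

[Unit]
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
# No shell is involved, so no trailing "&"; autossh stays in the foreground.
ExecStart=/usr/bin/autossh -M 0 -o "ExitOnForwardFailure=yes" -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -NT -o Tunnel=point-to-point -w 1:1 192.168.1.111
ExecStartPost=/bin/bash /root/scripts/link-up.sh
# Retry when the network is not up yet and when the tunnel drops.
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target

Note that network-online.target only delays the unit if a wait-online service (systemd-networkd-wait-online or NetworkManager-wait-online) is enabled.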
-
Running a Python Flask server in closed Terminal session
I've made a Flask API server that I want to run on a remote server (over SSH), then close the terminal session and still keep it running. The API makes a lot of requests to other servers and uses threading to make this process faster.
I've tried the setsid command, and this works fine until I close the terminal session; once it's closed, I only get 500 errors.
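One common way to detach such a process from the terminal is nohup plus output redirection, so the server does not die with the session's hangup signal. A generic sketch, with app.py standing in for the actual entry point:

# Survives the terminal closing; logs go to a file instead of the dead TTY.
nohup python3 app.py > flask_api.log 2>&1 &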
-
Unable to Connect to SFTP through paramiko
I am trying to establish a connection with an SFTP server using paramiko. I was able to generate the known_hosts file on my local system by running
ssh my.domain.com
The resulting file has both the host and its IP on the first line of known_hosts, like
my.domain.com,xx.xx.xxx.xx ...
When I try to connect through paramiko,
host, port = 'my.domain.com,xx.xx.xxx.xx', 22
user, pwd = "xyz", "abc"

ssh = paramiko.SSHClient()
ssh.connect(host, port, username=user, password=pwd)
I get the error
socket.gaierror: [Errno 11001] getaddrinfo failed
After looking this up, the suggested solutions were to not put the user in the host string, to pass the port separately, etc., but I'm still not able to connect. I tried removing my.domain.com from both the Python code and the known_hosts file:

host, port = 'xx.xx.xxx.xx', 22
user, pwd = "xyz", "abc"

ssh = paramiko.SSHClient()
ssh.connect(host, port, username=user, password=pwd)
but that didn't work. I tried removing xx.xx.xxx.xx from both the Python code and the known_hosts file:

host, port = 'my.domain.com', 22
user, pwd = "xyz", "abc"

ssh = paramiko.SSHClient()
ssh.connect(host, port, username=user, password=pwd)
but that didn't work either.
How do I connect to my SFTP?
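For what it's worth, paramiko hands the whole host string to getaddrinfo, and 'my.domain.com,xx.xx.xxx.xx' (the comma-joined known_hosts form) is not a resolvable hostname, which is what Errno 11001 reports. A minimal sketch with a single hostname and the SFTP channel opened from the SSH client (credentials are the placeholders from the question):

import paramiko

host, port = 'my.domain.com', 22   # one hostname, not the "host,ip" known_hosts form
user, pwd = "xyz", "abc"

ssh = paramiko.SSHClient()
ssh.load_system_host_keys()                                # reuse the local known_hosts
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # or verify strictly in production
ssh.connect(host, port=port, username=user, password=pwd)

sftp = ssh.open_sftp()      # SFTP session over the established SSH transport
print(sftp.listdir('.'))    # quick sanity check
sftp.close()
ssh.close()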
-
Android RecyclerView displays only one row of HeaderView
I am implementing a RecyclerView holding item views of both the VIEW_HEADER and VIEW_ITEM types. Each header view appears above a triplet of item views. When I implement both view types, I only get the first header at runtime in the RecyclerView. When I disable the header view, I get the item views in sequential order.
Here is the declaration of the code for the recyclerview adapter:
package com.example.cvdriskestimator.CustomClasses

import android.graphics.drawable.Drawable
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.widget.ImageView
import android.widget.RelativeLayout
import android.widget.TextView
import androidx.recyclerview.widget.RecyclerView
import com.example.cvdriskestimator.MainActivity
import com.example.cvdriskestimator.R

class leaderBoardRecyclerAdapter : RecyclerView.Adapter<RecyclerView.ViewHolder>() {

    val VIEW_ITEM = 0
    val VIEW_HEADER = 1

    private lateinit var prActivity: MainActivity
    var nameDataSet = ArrayList<String>()
    var participantAvatars = ArrayList<Drawable>()
    private var currentPartId = 1

    fun setActivity(mainActivity: MainActivity) {
        prActivity = mainActivity
    }

    override fun getItemViewType(position: Int): Int {
        var view_type = (position % 4)
        if (view_type == 0)
            return VIEW_HEADER
        else
            return VIEW_ITEM
        return VIEW_ITEM
    }

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): RecyclerView.ViewHolder {
        when (viewType) {
            1 -> {
                val view: View = LayoutInflater.from(prActivity.applicationContext)
                    .inflate(R.layout.leaderboard_header_item_layout, parent, false)
                return leaderHeaderViewHolder(view)
            }
            0 -> {
                val view: View = LayoutInflater.from(prActivity.applicationContext)
                    .inflate(R.layout.leaderboard_item_layout, parent, false)
                return leaderBoardViewHolder(view)
            }
        }
        val view = View(prActivity.applicationContext)
        return leaderBoardViewHolder(view)
    }

    override fun onBindViewHolder(holder: RecyclerView.ViewHolder, position: Int) {
        when (getItemViewType(position)) {
            0 -> {
                // populate data for the viewHolder
                var leaderBoardViewHolder = holder as leaderBoardViewHolder
                leaderBoardViewHolder.part_name.text = nameDataSet[currentPartId - 1]
                leaderBoardViewHolder.part_order_id.text = (currentPartId).toString()
                leaderBoardViewHolder.part_avatar.setImageDrawable(participantAvatars[currentPartId - 1])
                currentPartId++
                leaderBoardViewHolder.part_score.text = leaderBoardViewHolder.calculateScore(currentPartId).toString()
            }
            1 -> {
                var leaderHeaderViewHolder = holder as leaderHeaderViewHolder
                leaderHeaderViewHolder.leader_header_txtV.text = leaderHeaderViewHolder.setGroupLetter(currentPartId - 1)
            }
        }
    }

    override fun getItemCount(): Int {
        return nameDataSet.size
    }
}

open class leaderBoardViewHolder : RecyclerView.ViewHolder {

    var part_order_id: TextView
    var part_name: TextView
    var part_avatar: ImageView
    var part_rel_layout: RelativeLayout
    var part_score: TextView

    constructor(itemView: View) : super(itemView) {
        part_rel_layout = itemView.findViewById(R.id.partRelLayout)
        part_order_id = itemView.findViewById(R.id.partorderTxtV)
        part_score = itemView.findViewById(R.id.partScoreTxtV)
        part_name = itemView.findViewById(R.id.parttxtV)
        part_avatar = itemView.findViewById(R.id.partAvatorImgV)
    }

    fun calculateScore(id: Int): Int {
        val score = 10000 - (id * 500)
        return score
    }
}

class leaderHeaderViewHolder : RecyclerView.ViewHolder {

    var leader_header_txtV: TextView

    constructor(itemView: View) : super(itemView) {
        leader_header_txtV = itemView.findViewById(R.id.groupleadTxtV)
    }

    fun setGroupLetter(position: Int): String {
        var result = (position / 3)
        var letter = "A"
        when (result) {
            0 -> { letter = "GROUP A" }
            1 -> { letter = "GROUP B" }
            2 -> { letter = "GROUP C" }
            3 -> { letter = "GROUP D" }
        }
        return letter
    }
}
And on the fragment, the declaration of the RecyclerView:
private fun initLeaderRecyclerView(view: View) {
    leaderboardRecyclerView = view.findViewById(R.id.leaderBoardRecyclerView)
    populateDataForRecyclerView()
    leaderBoardRecyclerAdapter = leaderBoardRecyclerAdapter()
    leaderBoardRecyclerAdapter.setActivity(mainActivity)
    leaderBoardRecyclerAdapter.nameDataSet = participantNames
    leaderBoardRecyclerAdapter.participantAvatars = participantAvatars
    leaderboardRecyclerView.apply {
        adapter = leaderBoardRecyclerAdapter
    }
    var linearLayoutManager = LinearLayoutManager(mainActivity.applicationContext)
    linearLayoutManager.orientation = LinearLayoutManager.VERTICAL
    leaderboardRecyclerView.layoutManager = linearLayoutManager
}
Any ideas why this might be occurring?
Thank you in advance,
Lampros
-
How to display a bytecode string in Angular
I'm building a client side in Angular 13. I'm getting a response from the server in the form of a string. The string contains a bytecode that represents a full HTML page (background image, text, CSS, etc.). I was wondering how I could take that string and display it in Angular. I know that you can display a string as HTML like this:
<div [innerHTML]="response"></div>
but then I don't get the full HTML with the background images and everything. Thanks in advance.
-
Import and Display Image as a Screen in Python Tkinter
I am using Visual Studio Code, Python, and Tkinter in this program, and I want to import and display an image from my computer as a screen (it is at the end). I tried to import the image by copying a statement from a video example. However, when I run the program, it says
tkinter.TclError: bitmap "IMAGE.jpg" not defined
import tkinter
from tkinter import Tk
from PIL import ImageTk, Image

screen = Tk()
screen.iconbitmap("IMAGE.jpg")
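The TclError arises because iconbitmap() only accepts .ico/.xbm bitmap files, not a JPEG. A small sketch (keeping the IMAGE.jpg name from the question) that sets the window icon via iconphoto() and shows the image as a full-window label:

import tkinter
from tkinter import Tk
from PIL import ImageTk, Image

screen = Tk()

# iconphoto() takes any PhotoImage; iconbitmap() would reject a JPEG.
icon = ImageTk.PhotoImage(Image.open("IMAGE.jpg"))
screen.iconphoto(False, icon)

# Display the same image filling the window.
label = tkinter.Label(screen, image=icon)
label.pack(fill="both", expand=True)

screen.mainloop()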
-
AssertionError: Some Python objects were not bound to checkpointed values on Jetson Nano with TensorRT
I am trying to run inference on my Jetson Nano with TensorRT, but this error keeps popping up. I don't really work with .pb models but rather .h5 models, which is why I am converting my model from .h5 to .pb. I am using this script to convert my model to a TensorRT model; it's a modification of the NVIDIA docs script, but more or less the same: Nvidia Docs
import tensorflow as tf
import os

model_dir = 'fc_medium'
tf_model_dir = 'final_vitis/float/' + model_dir + '.h5'
model = tf.keras.models.load_model(tf_model_dir)

input_saved_model_dir = 'final_jetson/' + model_dir + '/tf/'
os.makedirs(input_saved_model_dir, exist_ok=True)
tf.saved_model.save(model, input_saved_model_dir)

output_saved_model_dir = 'final_jetson/' + model_dir + '/tf_trt/'
os.makedirs(output_saved_model_dir, exist_ok=True)

from tensorflow.python.compiler.tensorrt import trt_convert as trt
converter = trt.TrtGraphConverterV2(input_saved_model_dir=input_saved_model_dir)
converter.convert()
converter.save(output_saved_model_dir)
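All of the unbound objects listed in the traceback below are optimizer slot variables (the m and v Adam moments for each dense layer), which suggests the .h5 file carries training state that the TF-TRT loader cannot rebind. A hedged variant of the loading step that sidesteps this by not restoring the optimizer at all:

import os
import tensorflow as tf

model_dir = 'fc_medium'
tf_model_dir = 'final_vitis/float/' + model_dir + '.h5'

# compile=False loads the architecture and weights only, dropping the
# optimizer and its m/v slot variables, so the SavedModel is inference-only.
model = tf.keras.models.load_model(tf_model_dir, compile=False)

input_saved_model_dir = 'final_jetson/' + model_dir + '/tf/'
os.makedirs(input_saved_model_dir, exist_ok=True)
tf.saved_model.save(model, input_saved_model_dir)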
Traceback (most recent call last): File "app_jetson_tensorRT.py", line 21, in <module> converter.convert() File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 1216, in convert self._input_saved_model_tags) File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load.py", line 900, in load result = load_internal(export_dir, tags, options)["root"] File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load.py", line 939, in load_internal ckpt_options, options, filters) File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load.py", line 166, in __init__ self._restore_checkpoint() File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load.py", line 495, in _restore_checkpoint load_status.assert_existing_objects_matched() File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/util.py", line 831, in assert_existing_objects_matched (list(unused_python_objects),)) AssertionError: Some Python objects were not bound to checkpointed values, likely due to changes in the Python program: [<tf.Variable 'dense_4_m/bias/v:0' shape=(10,) dtype=float32, numpy=array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)>, <tf.Variable 'dense_3_m/bias/m:0' shape=(128,) dtype=float32, numpy= array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)>, <tf.Variable 'dense_1_m/kernel/m:0' shape=(784, 256) dtype=float32, numpy= array([[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]], dtype=float32)>, <tf.Variable 'dense_1_m/kernel/v:0' shape=(784, 256) dtype=float32, numpy= array([[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]], dtype=float32)>, <tf.Variable 'dense_4_m/bias/m:0' shape=(10,) dtype=float32, numpy=array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)>, <tf.Variable 'dense_3_m/kernel/v:0' shape=(128, 128) dtype=float32, numpy= array([[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]], dtype=float32)>, <tf.Variable 'dense_2_m/kernel/m:0' shape=(256, 128) dtype=float32, numpy= array([[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]], dtype=float32)>, <tf.Variable 'dense_1_m/bias/v:0' shape=(256,) dtype=float32, numpy= array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)>, <tf.Variable 'dense_2_m/bias/m:0' shape=(128,) dtype=float32, numpy= array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)>, <tf.Variable 'dense_3_m/bias/v:0' shape=(128,) dtype=float32, numpy= array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)>, <tf.Variable 'dense_4_m/kernel/m:0' shape=(128, 10) dtype=float32, numpy= array([[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]], dtype=float32)>, <tf.Variable 'dense_2_m/kernel/v:0' shape=(256, 128) dtype=float32, numpy= array([[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]], dtype=float32)>, <tf.Variable 'dense_3_m/kernel/m:0' shape=(128, 128) dtype=float32, numpy= array([[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]], dtype=float32)>, <tf.Variable 'dense_4_m/kernel/v:0' shape=(128, 10) dtype=float32, numpy= array([[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]], dtype=float32)>, <tf.Variable 'dense_1_m/bias/m:0' shape=(256,) dtype=float32, numpy= array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)>, <tf.Variable 'dense_2_m/bias/v:0' shape=(128,) dtype=float32, numpy= array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)>] WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-0.kernel WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-0.bias WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-1.kernel WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-1.bias WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-2.kernel WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-2.bias WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-3.kernel WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-3.bias WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-0.kernel WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-0.bias WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-1.kernel WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-1.bias WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-2.kernel WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-2.bias WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-3.kernel WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-3.bias WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. 
tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.
-
H264 decoder for RTSP stream inside the docker
I am working on an Nvidia Jetson AGX Xavier inside a Dockerized container. I want to take input from an RTSP stream; its encoding type is H264 and the input is .avi video. The input stream frame size is 1920x1080 (in code I am resizing it to 1280x720).
I have used this GStreamer pipeline
cap = cv2.VideoCapture('rtspsrc location="rtsp_link" latency=200 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! appsink', cv2.CAP_GSTREAMER)
It is able to read the frames, but the frames come out completely blurred, and that's why object detection is not happening.
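One hedged thing to try, assuming the L4T GStreamer plugins are available inside the container: decode on the Jetson hardware block with nvv4l2decoder instead of the software avdec_h264, and give appsink explicit BGR caps so OpenCV receives well-formed frames:

import cv2

# nvv4l2decoder uses the Jetson hardware decoder; nvvidconv copies the frames
# out of NVMM memory, and videoconvert produces the BGR layout OpenCV expects.
# "rtsp_link" is the placeholder URL from the question.
pipeline = (
    'rtspsrc location="rtsp_link" latency=200 ! queue ! '
    'rtph264depay ! h264parse ! nvv4l2decoder ! '
    'nvvidconv ! video/x-raw,format=BGRx ! '
    'videoconvert ! video/x-raw,format=BGR ! appsink drop=true'
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)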
-
ImportError: cannot import name 'dtensor'
I just got a Jetson Nano and created my SD card with JetPack 4.6.1. After that I installed TensorFlow like this: [Tensorflow-Install][1]
Then I wanted to create an MNIST model, but it seems like I can't import Keras? Any idea?
I just installed TensorFlow and upgraded all apt-get packages.
>>> import tensorflow.keras
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/keras/api/_v2/keras/__init__.py", line 12, in <module>
    from keras import __version__
  File "/usr/local/lib/python3.6/dist-packages/keras/__init__.py", line 24, in <module>
    from keras import models
  File "/usr/local/lib/python3.6/dist-packages/keras/models/__init__.py", line 18, in <module>
    from keras.engine.functional import Functional
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/functional.py", line 24, in <module>
    from keras.dtensor import layout_map as layout_map_lib
  File "/usr/local/lib/python3.6/dist-packages/keras/dtensor/__init__.py", line 22, in <module>
    from tensorflow.compat.v2.experimental import dtensor as dtensor_api  # pylint: disable=g-import-not-at-top
ImportError: cannot import name 'dtensor'
>>>
I would appreciate any help! [1]: https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html
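This particular ImportError is typically a version mismatch: the standalone keras package (2.9 and later) imports dtensor, which only exists in tensorflow 2.9+, so a keras newer than the installed tensorflow fails exactly at keras/dtensor/__init__.py. A hedged check-and-pin sketch:

# See which TensorFlow is actually installed...
python3 -c "import tensorflow as tf; print(tf.__version__)"

# ...then pin keras to the matching minor version, e.g. for TF 2.8:
pip3 install 'keras==2.8.*'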
-
Getting private IP address from 4G Dongle
I'm currently trying to make the SIM7600X 4G DONGLE from Waveshare work while connected to an NVIDIA Jetson Nano (which basically has Ubuntu 18 installed).
I've followed the instructions from here, and whenever I request a new IP address for the wwan0 device I get a private IP address like 10.166.242.230, for instance.
udhcpc: started, v1.27.2
udhcpc: sending discover
udhcpc: sending select for 10.166.242.230
udhcpc: lease of 10.166.242.230 obtained, lease time 7200
Now I'm doubting this is just related to the dongle. Probably there's some weird DHCP configuration going on in this Ubuntu 18 that's messing things up.
Any ideas?
-
TensorFlow Lite FP16 Model is slower than normal TensorFlow Model on Jetson Nano
I wanted to compare TensorFlow models to quantized TensorFlow Lite models. I am quantizing my models to FP16 and running them as shown below. The weird part is that for small models the TF Lite model is, as expected, a lot faster than the TF model, but as the models get larger I see a drop in performance for the TF Lite models, but not for the TF models. Why is that the case? Is the Jetson Nano not optimized to run TF Lite models? I installed TF and TF Lite like this:
https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html
This is how I run my TF Lite models.
from re import I
import tensorflow as tf
from tensorflow import keras
import argparse
import pathlib
import numpy as np
import time
import serial


def create_batch(x, batch_size):
    size = x.shape[0]
    n = int(size / batch_size)
    batch_array = x.reshape(n, batch_size, 28, 28, 1)
    return batch_array


def evaluate_model(interpreter, x_test, y_test, batch_size, runs):
    input_index = interpreter.get_input_details()[0]["index"]
    output_index = interpreter.get_output_details()[0]["index"]

    prediction_digits = []
    for test_image in x_test:
        # Pre-processing: add batch dimension and convert to float32 to match with
        # the model's input data format.
        test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
        interpreter.set_tensor(input_index, test_image)
        # Run inference.
        interpreter.invoke()
        # Post-processing: remove batch dimension and find the digit with highest
        # probability.
        output = interpreter.tensor(output_index)
        digit = np.argmax(output()[0])
        prediction_digits.append(digit)

    accurate_count = np.sum(y_test == prediction_digits)
    accuracy = accurate_count * 1.0 / len(prediction_digits)

    x_batch = create_batch(x_test, batch_size)
    # Run predictions on every image in the "test" dataset.
    prediction_digits = []
    latency = []
    fps = []
    for _ in range(runs):
        for batch in x_batch:
            ser.write(b'\x00')
            time1 = time.time()
            for test_image in batch:
                # Pre-processing: add batch dimension and convert to float32 to match with
                # the model's input data format.
                test_image = np.expand_dims(
                    test_image, axis=0).astype(np.float32)
                interpreter.set_tensor(input_index, test_image)
                # Run inference.
                interpreter.invoke()
                # Post-processing: remove batch dimension and find the digit with highest
                # probability.
                output = interpreter.tensor(output_index)
                prediction_digits.append(output)
            time2 = time.time()
            ser.write(b'\x00')
            batch_time = time2 - time1
            frames = float(x_batch.shape[1] / batch_time)
            fps.append(frames)
            latency.append(batch_time)

    return accuracy, latency, fps


def convert(model_type, runs, batch_size):
    model_dir = 'final_vitis/float/' + model_type + '.h5'
    model = keras.models.load_model(model_dir)
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.target_spec.supported_types = [tf.float16]
    tflite_fp16_model = converter.convert()
    tflite_model_fp16_file = pathlib.Path(
        "final_vitis/quant_model/" + model_type + ".tflite")
    tflite_model_fp16_file.write_bytes(tflite_fp16_model)

    interpreter_fp16 = tf.lite.Interpreter(
        model_path=str(tflite_model_fp16_file))
    interpreter_fp16.allocate_tensors()

    x_test = np.load("Data/x_test.npy")
    y_test = np.argmax(np.load("Data/y_test.npy"), axis=1)

    accuracy = []
    accuracy, latency, fps = evaluate_model(
        interpreter_fp16, x_test, y_test, batch_size, runs)
    mean_latency = np.mean(np.array(latency))
    mean_fps = np.mean(np.array(fps))
    std_latency = np.std(np.array(latency))
    std_fps = np.std(np.array(fps))

    print('\n')
    print("Results: ")
    print(" Accuracy: {:.4f}".format(accuracy))
    print(" Mean Latency: {:.4f}".format(mean_latency))
    print(" Mean FPS: {:.4f}".format(mean_fps))
    print(" STD Latency: {:.4f}".format(std_latency))
    print(" STD FPS: {:.4f}".format(std_fps))
    print(" Lowest period: {:.4f}".format(np.min(latency)))
    print(" Highest period: {:.4f}".format(np.max(latency)))
    print(" Total time: {:.4f}".format(np.sum(latency)))

    data = {"Accuracy": '{:.4f}'.format(accuracy),
            "Mean Latency": '{:.4f}'.format(mean_latency),
            "Mean FPS": '{:.4f}'.format(mean_fps),
            "STD Latency": '{:.4f}'.format(std_latency),
            "STD FPS": '{:.4f}'.format(std_fps),
            "Runs": '{}'.format(runs),
            "Batch Size": '{}'.format(batch_size),
            "Lowest period": "{:.4f}".format(np.min(latency)),
            "Highest period": "{:.4f}".format(np.max(latency))}


def main():
    ap = argparse.ArgumentParser()
    ap.add_argument('-m', '--model', type=str, required=True,
                    help='Selects the Model type that has to be trained')
    ap.add_argument('-r', '--runs', type=int, default=10,
                    help='Number of inference runs')
    ap.add_argument('-b', '--batch_size', type=int, default=10,
                    help='Defines how big an inference batch is.')
    args = ap.parse_args()
    convert(args.model, args.runs, args.batch_size)


if __name__ == "__main__":
    global ser
    ser = serial.Serial('/dev/ttyTHS1')
    main()
    ser.close()
And these are some of the Models:
def create_fc_small(input_shape, output_shape):
    x = x_in = Input(input_shape, name='input_1_m')
    x = Flatten(name='flatten_1_m')(x)
    x = Dense(16, name='dense_1_m')(x)
    x = Activation("relu", name='act_1_m')(x)
    x = Dense(128, name='dense_2_m')(x)
    x = Activation("relu", name='act_2_m')(x)
    x = Dense(output_shape, name='dense_3_m')(x)
    x = Activation("softmax", name='act_3_m')(x)
    model = Model(inputs=[x_in], outputs=[x])
    return model


def create_fc_medium(input_shape, output_shape):
    x = x_in = Input(input_shape, name='input_1_m')
    x = Flatten(name='flatten_1_m')(x)
    x = Dense(256, name='dense_1_m')(x)
    x = Activation("relu", name='act_1_m')(x)
    x = Dense(128, name='dense_2_m')(x)
    x = Activation("relu", name='act_2_m')(x)
    x = Dense(128, name='dense_3_m')(x)
    x = Activation("relu", name='act_3_m')(x)
    x = Dense(output_shape, name='dense_4_m')(x)
    x = Activation("softmax", name='act_4_m')(x)
    model = Model(inputs=[x_in], outputs=[x])
    return model


def create_fc_large(input_shape, output_shape):
    x = x_in = Input(input_shape, name='input_1_m')
    x = Flatten(name='flatten_1_m')(x)
    x = Dense(512, name='dense_1_m')(x)
    x = Activation("relu", name='act_1_m')(x)
    x = Dense(1024, name='dense_2_m')(x)
    x = Activation("relu", name='act_2_m')(x)
    x = Dense(512, name='dense_3_m')(x)
    x = Activation("relu", name='act_3_m')(x)
    x = Dense(256, name='dense_4_m')(x)
    x = Activation("relu", name='act_4_m')(x)
    x = Dense(output_shape, name='dense_5_m')(x)
    x = Activation("softmax", name='act_5_m')(x)
    model = Model(inputs=[x_in], outputs=[x])
    return model
These are the results of the TensorFlow Lite models compared to the TensorFlow models:
"TFLite_fc_large": { "Accuracy": "0.9826", "Batch Size": "100", "Highest period": "0.4886", "Lowest period": "0.1725", "Mean FPS": "576.5339", "Mean Latency": "0.1744", "Runs": "4", "STD FPS": "25.5312", "STD Latency": "0.0205" }, "TFLite_fc_medium": { "Accuracy": "0.9789", "Batch Size": "1000", "Highest period": "0.5136", "Lowest period": "0.2015", "Mean FPS": "4815.7471", "Mean Latency": "0.2133", "Runs": "6", "STD FPS": "518.2765", "STD Latency": "0.0521" }, "TFLite_fc_small": { "Accuracy": "0.9561", "Batch Size": "2500", "Highest period": "0.4514", "Lowest period": "0.1795", "Mean FPS": "13090.5383", "Mean Latency": "0.2034", "Runs": "6", "STD FPS": "2248.3026", "STD Latency": "0.0724" }, "TF_fc_large": { "Accuracy": "0.9826", "Batch Size": "1000", "Highest period": "0.5807", "Lowest period": "0.2713", "Mean FPS": "3539.5740", "Mean Latency": "0.2854", "Runs": "6", "STD FPS": "262.9691", "STD Latency": "0.0396" }, "TF_fc_medium": { "Accuracy": "0.9789", "Batch Size": "1000", "Highest period": "0.5685", "Lowest period": "0.2621", "Mean FPS": "3656.5702", "Mean Latency": "0.2762", "Runs": "6", "STD FPS": "265.5665", "STD Latency": "0.0388" }, "TF_fc_small": { "Accuracy": "0.9562", "Batch Size": "1000", "Highest period": "0.5525", "Lowest period": "0.2555", "Mean FPS": "3756.8987", "Mean Latency": "0.2689", "Runs": "6", "STD FPS": "274.3545", "STD Latency": "0.0377" }