docker <.h> no such file or directory
What could cause the gcc compiler to throw a "file not found" error for a header when compiling inside a Docker container?
In the Docker environment, the error is thrown for only one header file out of ten; the other nine #include <.h> statements compile without complaint. I don't understand why.
It worked perfectly fine when compiled normally. I am fairly sure that I added all the include and library files to the Docker image and referenced them with the -I, -L, and -l options in the gcc command.
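A minimal way to check the two usual suspects (the header was never copied into the image, or its directory is missing from the -I search path) is sketched below; the header name mylib.h and the path /opt/mylib/include are invented for illustration and are not from the question:

/* Hypothetical reproduction: create the header inside the
 * container, then compile with the matching -I path.
 *
 *   mkdir -p /opt/mylib/include
 *   echo '#define GREETING "found it"' > /opt/mylib/include/mylib.h
 *   ls /opt/mylib/include/mylib.h        # verify the file really exists
 *   gcc -I/opt/mylib/include main.c -o main
 */
#include <mylib.h>   /* resolved only because of -I/opt/mylib/include */
#include <stdio.h>

int main(void) {
    puts(GREETING);
    return 0;
}

If the ls step fails inside the container, the problem is in the COPY/ADD step (or a .dockerignore entry) rather than in the gcc flags. Note that -I only affects header lookup, while -L and -l only matter at link time, so they cannot fix a missing-header error.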
See also questions close to this topic
- unable to select packages curl for Alpine 3.12
ERROR: unable to select packages:
curl-7.76.1-r0:
breaks: world[curl=7.69.1-r3]
The command '/bin/sh -c apk upgrade -U -a && apk add --no-cache tini=0.19.0-r0 curl=7.69.1-r3 libcrypto1.1=1.1.1k-r0 libssl1.1=1.1.1k-r0' returned a non-zero code: 1
script returned exit code 1
We get a similar error even though we have updated curl to the latest version, curl-7.76.1-r0.
Alpine Version: 3.12
- Losing RabbitMQ messages in a Docker container
I've set up a Minikube cluster that runs RabbitMQ and KEDA. The aim is to scale containers based on the RabbitMQ messages in a single queue. The scaling mechanism works fine: whenever I send a message into the queue, a container spins up. The problem is that this container is also a consumer of the same queue and isn't receiving any messages. I can see in the web dashboard that the message is being consumed, and I can tell it is the correct consumer because I gave it a unique username. Below is my code for publishing the message:
const cluster = "amqp://EmZn4ScuOPLEU1CGIsFKOaQSCQdjhzca:dJhLl2aVF78Gn07g2yGoRuwjXSc6tT11@192.168.49.2:30861"; amqp.connect(cluster, (error0, connection) => { if (error0) throw error0; connection.createChannel((error1, channel) => { if (error1) throw error1; const queue = "files"; channel.assertQueue(queue, { durable: true, arguments: { "x-message-ttl": 30000 } }); msgJson = JSON.stringify(newUser); channel.sendToQueue(queue, Buffer.from(msgJson)); console.log("Message sent:" + msgJson); }); });
And the code for receiving:
const cluster = "amqp://ffmpeg:ffmpeg@192.168.49.2:30861" amqp.connect(cluster, (error0, connection) => { if (error0) throw error0; connection.createChannel((error1, channel) => { if (error1) throw error1; const queue = "files"; channel.assertQueue(queue, { durable: true, arguments: { "x-message-ttl": 30000 } }); console.log(`Waiting for messages in ${queue}`); channel.consume(queue, async(msg) => { console.log("Message received: " + msg.content); user = JSON.parse(msg.content); channel.close(); connection.close(); await delay(5000); transcode(user); }, { noAck: true }); }); });
The really confusing thing is that when the rest of the container code (the transcode() function) throws an error and exits the process, the message is printed. But when the code works, no message is printed.
Honestly, I have no real idea why this is happening. Any suggestions?
- How to create a horizontally scalable machine learning web application on Google Compute Engine
I have a machine learning web application built with Flask. I deployed it on a VM instance in Google Compute Engine and it works. I want to make it scalable, so that when more users access it each user is served by a different VM instance and there is no conflict.
I followed "Creating an HTTP load balancer" in Google Cloud Platform and successfully created a custom image, an instance template, and an instance group. But it did not work as I wanted; there must be something wrong with my steps.
I launched the web app on an instance by running "flask run" manually. This is not correct, right? Newly created instances won't do this by themselves. Should I host it with, for example, Nginx?
Each instance has a different external IP address, but users should access the app through a single link. I did not separate the frontend from the backend (the machine learning model). Should I separate them and deploy them to different instances? Is it possible for the whole group to share one IP address?
When I followed that tutorial, I noticed that the author sets the port to 80 in several places, but Flask uses port 5000. Will there be a conflict? When I tried to change the port to 5000 while creating the sample-map for the load-balancing step, it gave me errors about the port ranges.
Thanks in advance.
- 'unused variable' warning in GCC for 'extern' variables
I have a constant external variable that I declare in the .h file:
globals.h
extern const int a;
The variable is initialized in the corresponding .c file:
main.c
#include "globals.h" const int a = 1;
However, the variable is not used in the .c file where it is defined; it is used in another .c file. GCC gives the warning [-Werror=unused-variable], probably because it does not realize that the variable is in fact used in another file. I am relatively new to C. How do I fix this issue?
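For reference, a minimal layout that normally compiles without the warning is sketched below (user.c is a made-up file name); GCC only flags unused variables with internal linkage, so the important detail is that the extern declaration from the header is visible before the definition:

/* globals.h */
extern const int a;        /* declaration: gives 'a' external linkage */

/* main.c */
#include "globals.h"       /* include BEFORE the definition           */
const int a = 1;           /* definition: no unused-variable warning  */

/* user.c (hypothetical) -- the translation unit that reads 'a' */
#include "globals.h"
#include <stdio.h>

int main(void) {
    printf("a = %d\n", a);
    return 0;
}

If the warning still appears, GCC's __attribute__((unused)) on the definition suppresses it without changing behavior, e.g. const int a __attribute__((unused)) = 1;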
- Fix for "universal reference cannot bind to packed struct fields" in gcc/g++, and side effects
This isn't the same as the answers to similar questions: those require modifying the caller's source code, whereas this fixes the problem in the callee without touching the caller's code. (See "Passing reference of packed struct member to template. gcc bug?".)
This code compiles fine in clang without any change, but g++ throws an error saying the packed struct field cannot be bound to a reference.
Also, make_pair(packed_bitfield, packed_bitfield) gives a similar compile error. If the code that calls the universal-reference function f can be altered, I can pass f(static_cast<const int &>(x.i)) or f(as_const(x.i)); the const binding creates a temporary, which is aligned. But in many cases I cannot change the calling code.
In what cases will my fix fail to work? Am I missing any corner cases?
#include <iostream>
using namespace std;

template<typename T>
int f(T&& args) { return args; }

struct X {
    char c;
    unsigned int i;
} __attribute__((packed));

int main() {
    X x;
    x.i = 3;
    cout << "x.i= " << f(x.i) << endl;
    return 0;
}
test3.cpp: In function ‘int main()’:
test3.cpp:16:22: error: cannot bind packed field ‘x.X::i’ to ‘unsigned int&’
   16 |     cout <<"x.i= "<<f(x.i)<<endl;
      |                       ~~^
To fix this, make the universal reference const (const T&&) and add another overload, f(const T& arg), that is only instantiated when T is not an rvalue reference. The new code becomes:

template<typename T>
expr_lhs<const T> f(const T && head) {
    cout << "in const universal ref ";
    return expr_lhs<const T>(forward<const T>(head));
}

template<typename T, typename enable_if< !is_rvalue_reference<T>::value, void >::type* = nullptr>
int f(const T &head) {
    cout << "in const non rvalue ref ";
    return 17;
}
This compiles and produces the output:
- 5= in const universal ref 5
- temp rvalue = in const universal ref 0
- rvalue = in const universal ref 3
- i = in const non rvalue ref 17
- x.i= in const non rvalue ref 17
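As an aside, the underlying problem can be reproduced in plain C, where taking a pointer to the packed member plays the role of binding a reference; a minimal sketch (not from the question above), assuming GCC, where -Waddress-of-packed-member is enabled by default since GCC 9:

#include <stdio.h>

struct X {
    char c;
    unsigned int i;            /* at offset 1: not 4-byte aligned */
} __attribute__((packed));

int main(void) {
    struct X x = { 'a', 3 };

    /* unsigned int *p = &x.i;   <- gcc warns: taking address of a
     * packed member may result in an unaligned pointer value      */

    unsigned int copy = x.i;   /* copying to an aligned local is safe */
    printf("copy = %u\n", copy);
    return 0;
}

This appears to be why the const overloads above help: the compiler can bind a const reference to an aligned copy rather than pointing into the packed object itself.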
- Compiler produces different binary from the same code for different users on the same server
The differences are a few kilobytes of binary size and significant differences in the output of cuobjdump -sass. Neither printenv, nor nvcc -dryrun, nor g++ -dumpspecs revealed any significant difference to me. I can also rule out .bashrc and Nvidia's .nv cache. What else can I do to find out why the same code (and Makefile) compiles differently under different user accounts, with nvcc and g++ 6 as the host compiler?