C++: Given multiple binary strings, produce a binary string where the nth bit is set if the nth bit is the same in all given strings
On input I am given multiple uint32_t numbers, which are in fact binary strings of length 32. I want to produce a binary string (a.k.a. another uint32_t number) where the nth bit is set to 1 if the nth bit in every given string from the input is the same.
Here is a simple example with 4-bit strings (just a smaller instance of the same problem):
input: 0011, 0101, 0110
output: 1000
because: the first bit is the same in every string of the input, therefore the first bit of the output will be set to 1, and the 2nd, 3rd and 4th bits will be set to 0 because they have different values.
What is the best way to produce the output from the given input? I know that I need to use bitwise operators, but I don't know which of them or in which order.
uint32_t getResult( const vector< uint32_t > & data ){
//todo
}
2 answers

You want the bits where all the source bits are 1 and the bits where all the source bits are 0. Just AND the source values and the NOT of the source values, then OR the results.
uint32_t getResult( const vector< uint32_t > & data ){
    uint32_t bitsSet = ~0;
    uint32_t bitsClear = ~0;
    for (uint32_t d : data) {
        bitsSet &= d;
        bitsClear &= ~d;
    }
    return bitsSet | bitsClear;
}

First of all you need to loop over the vector, of course.
Then we can use XOR of the current element and the next element. Save the result.
For the next iteration, do the same: XOR the current element with the next element, but then bitwise OR it with the saved result of the previous iteration, and save this result. Continue with this until you have iterated over all (minus one) elements.
The saved result is the complement of what you want.
Taking your example numbers (0011, 0101 and 0110), in the first iteration we have 0011 ^ 0101, which results in 0110. In the next iteration we have 0101 ^ 0110, which results in 0011. Bitwise OR with the previous result (0110 | 0011) gives 0111. End of loop, and the bitwise complement gives the result 1000.
See also questions close to this topic

Encapsulating ublas and overloading the const reference to the operator()
Considering the following toy example, where I declare a class which encapsulates ublas from the Boost libraries:

#include <boost/numeric/ublas/matrix_sparse.hpp>
#include <iostream>

namespace ublas = boost::numeric::ublas;

class UblasEncapsulated {
public:
    ublas::compressed_matrix<float>::reference operator()(int i, int j) {
        std::cout << "Non const reference" << std::endl;
        MtrUpdated_ = true;
        return mtr_(i, j);
    }

    ublas::compressed_matrix<float>::const_reference operator()(int i, int j) const {
        std::cout << "Const reference" << std::endl;
        return mtr_(i, j);
    }

    UblasEncapsulated() { MtrUpdated_ = false; }

private:
    ublas::compressed_matrix<float> mtr_{3, 3};
    bool MtrUpdated_;
};

int main() {
    UblasEncapsulated foo;
    foo(2, 0) = 1.0f;
    float const foo_float = foo(2, 0);
    return 0;
}
I was expecting the output

Non const reference
Const reference

But I got

Non const reference
Non const reference
What am I doing wrong? How can I properly track when mtr_ could have its values changed?
Specializing and or Overloading member function templates with variadic parameters
Trying to resolve overload resolution for a class member: static function template overload / partial specialization.
I currently have a class declared / defined as such:
Note: my use of Param a, Param b, Param c etc. is not related to the actual declarations / definitions directly. These can be any arbitrary type that is passed into the functions, for example: it could be int a, enum b, char c. I'm just using this to show the pattern of the declarations; however, all of the different engines take the same 3 parameters.

SomeEngine.h
#ifndef SOME_ENGINE_H
#define SOME_ENGINE_H

class SomeEngine {
public:
    SomeEngine() = delete;

    static EngineA& getEngineA( Param a, Param b, Param c );
    static EngineB& getEngineB( Param a, Param b, Param c );
    // ... more static functions to return other engines

    template<class Engine>
    static Engine& getEngine( Param a, Param b, Param c );
};

#endif // SOME_ENGINE_H
SomeEngine.cpp
#include "SomeEngine.h"

template<>
EngineA& SomeEngine::getEngine( Param a, Param b, Param c ) {
    return getEngineA( a, b, c );
}

template<>
EngineB& SomeEngine::getEngine( Param a, Param b, Param c ) {
    return getEngineB( a, b, c );
}
The above design pattern for the function template, where I was able to specialize the class to return the appropriate engine type using a single generic getEngine() call, compiles and works fine. I have a non-member function template that takes a class Engine as one of its template parameters... This is defined in the same header above, outside of any class and after the first two classes that it will use.

template<class Engine, typename T>
T generateVal( Param a, Param b, Param c ) {
    static T retVal = 0;
    static Engine engine = SomeEngine::getEngine<Engine>( a, b, c );
}
And the above works except the function shown here is not complete. It relies on another class. The other class itself has a similar pattern as the one above; it has a deleted default constructor, and a bunch of static methods to return the different types of objects; however in the 2nd class, almost all of the static methods themselves are function templates, some have overloaded versions while others have more than one template parameter. It is also declared in the same header file above. It looks something like this:
class SomeOther {
public:
    SomeOther() = delete;

    template<class IntType = int>
    static otherA<IntType>& getOtherA( IntType a, IntType b );

    template<class RealType = double>
    static otherB<RealType>& getOtherB( RealType a, RealType B );

    template<class IntType = int>
    static otherC<IntType>& getOtherC( IntType a );

    template<class RealType = double>
    static otherD<RealType>& getOtherD( RealType a );

    template<class IntType = int>
    static otherE<IntType>& getOtherE();

    template<class IntType = int>
    static otherE<IntType>& getOtherE( IntType a, IntType b );

    template<class IntType = int>
    static otherE<IntType>& getOtherE( std::initializer_list<double> a );

    template<class IntType = int, class X>
    static otherE<IntType>& getOtherE( std::size_t a, double b, double c, X x );
};
I'm trying to do something similar with the 2nd class above: to have a generic function template such that I can pass to it the template parameter class Other, except that class Other depends on its own template arguments and the internal function calls may take a different number of parameters. This led me to the use of variadic templates for the declarations of this class's functions.
I had tried something like this:

template<
    typename Type,
    template<typename, class...> class Other,
    class... OtherParams, class... FuncParams>
static Other<Type, OtherParams...>& getOther( FuncParams... params );
And then for my function shown above that was not complete, I tried this when adding in the support for the 2nd class:

template<
    class Engine, typename Type,
    template<typename, class...> class Other,
    class... OtherParams, class... FuncParams>
Type generate( Param a, Param b, Param c, FuncParams... params ) {
    static Type retVal = 0;
    static Engine engine = SomeEngine::getEngine<Engine>( a, b, c );
    static Other<Type, OtherParams...> other =
        SomeOther::getOther<Type, OtherParams...>( params... );
    retVal = other( engine );
    return retVal;
}
This is how I would be using the 2nd class above. Here are my attempts at specializing a couple of the getOther() functions in the corresponding cpp file:

template<
    typename Type,
    template<typename, class...> class Other,
    class... OtherParams, class... FuncParams>
otherA<Type>& SomeOther::getOther( FuncParams... params ) {
    return getOtherA( params... );
}

template<
    typename Type,
    template<typename, class...> class Other,
    class... OtherParams, class... FuncParams>
otherB<Type>& SomeOther::getOther( FuncParams... params ) {
    return getOtherB( params... );
}
This doesn't compile; it complains that the function definition does not match an existing declaration. I even tried to write an overload in the header file and I keep getting the same errors. I don't know if it is a syntax error or not. I've tried far too many things to list here; I've searched all over the place looking for something similar but cannot seem to find anything relevant.
I would like to know if something like this can be done, and if so, what needs to be changed above in order for it to at least compile and build, so I can start to test it during runtime and then move on to add other existing types.
I would like to keep the same design pattern for the 2nd class as for the first. My standalone function template is the function that will be called and, depending on its template parameters, it should know which engine / other types to call.

Construct object from serialized thrift string
I have two thrift structs:
struct A {
    1: optional i32 key;
    2: optional i32 value;
}

struct B {
    1: optional A keys;
    2: optional map<i32, i32> some_data;
}
Now I have a string which is a serialized version of B. How do I reconstruct B from the string? Is there a general way to do that? I am also curious about the error handling: what if my string is ill-formed?

Add two numbers using bit manipulation
I'm working on the following practice problem from GeeksForGeeks:
Write a function Add() that returns the sum of two integers. The function should not use any of the arithmetic operators (+, ++, –, -, .. etc).
The given solution in C# is:
public static int Add(int x, int y)
{
    // Iterate till there is no carry
    while (y != 0)
    {
        // carry now contains common set bits of x and y
        int carry = x & y;

        // Sum of bits of x and y where at least one of the bits is not set
        x = x ^ y;

        // Carry is shifted by one so that adding it to x gives the required sum
        y = carry << 1;
    }
    return x;
}
Looking at this solution, I understand how it is happening; I can follow along with the debugger and anticipate the value changes before they come. But after walking through it several times, I still don't understand WHY it is happening. If this were to come up in an interview, I would have to rely on memory to solve it, not an actual understanding of how the algorithm works.
Could someone help explain why we use certain operators at certain points and what those totals are supposed to represent? I know there are already comments in the code, but I'm obviously missing something...

How to interpret memory as big endian int16_t in x64 assembly?
I'd like to better understand how an Intel CPU might efficiently pull big-endian int16_t's from memory. My assembly's not the greatest, so I'm looking at the assembly output from clang. One way to accomplish this in C would be:

int16_t n = bytes[i+2] | (int16_t)bytes[i+1] << 8;
Clang (with no flags) converts this line to:
movl    36(%rbp), %edx
addl    $2, %edx
movslq  %edx, %rax
movzbl  32(%rbp,%rax), %edx
movl    36(%rbp), %esi
addl    $1, %esi
movslq  %esi, %rax
movzbl  32(%rbp,%rax), %esi
movw    %si, %di
movswl  %di, %esi
shll    $8, %esi
orl     %esi, %edx
movw    %dx, %di
movw    %di, 46(%rbp)
I know there's a lot going on in the C code but this seems kinda crazy since it seems like just a matter of masks and moves.
Furthermore it seems like there must be something that could help out in the 981 x64 instructions (or whatever number you like to use).
Is there a more succinct way to read memory as a big-endian int16_t?

Negating a value returns expected value - 1
Why is the result of negating a value not the expected value, but (expectedValue - 1)?
For example:

int a = ~(1<<2);      // -5, expected: -4
int c = ~((1<<2)-1);  // -4, expected: -3
int d = ~(4);         // -5, expected: -4

gles glsl bit wise operations problems
I am trying to create a shader that uses bitwise operations for a mobile application. I am using glsl version 320 es. To demonstrate the problem I have created a shadertoy example: https://www.shadertoy.com/view/MsVyRw which should show a red screen. The screen appears red when opened from my galaxy s8 as well. When running my application with the following fragment shader:

#version 320 es
precision highp float;
out vec4 fragColor;

void main() {
    uint x = uint(0xec140e57);
    uint tmp0 = x >> uint(4);
    uint tmp1 = uint(0xec140e57) >> uint(4);
    if (tmp0 == tmp1) {
        fragColor = vec4(1.0, 0.0, 0.0, 1.0);
    } else {
        fragColor = vec4(0.0, 0.0, 1.0, 1.0);
    }
}
The screen appears blue. However if I change
uint x = uint(0xec140e57);
uint tmp0 = x >> uint(4);
uint tmp1 = uint(0xec140e57) >> uint(4);
to
uint tmp0 = uint(0xec140e57) >> uint(4);
uint tmp1 = uint(0xec140e57) >> uint(4);
the screen appears red.
It is definitely not a problem with the GPU ALUs, since it works on Shadertoy. Is there something I am missing with the preprocessor flags that would allow this sort of operation?

What am I doing wrong? Conversion from HEX to DEC
I'm trying to convert a HEX number to DEC. The HEX is inverted: F6FD should be FDF6.

int a = 0xFD;
int b = 0xF6 << 8;
int res = a | b;
And the output is 10, but I expect 522. And if I do it this way:

unsigned int res2 = (unsigned char) 0xFD | (unsigned char) 0xF6 << 8;

the output is 65014 and not 522. What am I doing wrong?
Searching for a bit pattern in an unsigned int
I'm learning C through Kochan's Programming in C. One of the exercises is the following:
Write a function called bitpat_search() that looks for the occurrence of a specified pattern of bits inside an unsigned int. The function should take three arguments, and should be called as such:

bitpat_search (source, pattern, n)

The function searches the integer source, starting at the leftmost bit, to see if the rightmost n bits of pattern occur in source. If the pattern is found, have the function return the number of the bit at which the pattern begins, where the leftmost bit is number 0. If the pattern is not found, then have the function return -1. So, for example, the call

index = bitpat_search (0xe1f4, 0x5, 3);
causes the bitpat_search() function to search the number 0xe1f4 (= 1110 0001 1111 0100 binary) for the occurrence of the three-bit pattern 0x5 (= 101 binary). The function returns 11 to indicate that the pattern was found in the source beginning with bit number 11. Make certain that the function makes no assumptions about the size of an int.

This is the way I implemented the function:
#include <stdio.h>

int bitpat_search(unsigned int source, unsigned int pattern, int n);
int int_size(void);

int main(void)
{
    printf("%i\n", bitpat_search(0xe1f4, 0x5, 3));
    return 0;
}

int bitpat_search(unsigned int source, unsigned int pattern, int n)
{
    int size = int_size();
    pattern <<= (size - n);
    unsigned int compare = source;
    int bitnum = 0;
    while (compare) {
        compare >>= (size - n);
        compare <<= (size - n);
        if (compare & pattern) {
            return bitnum;
        } else {
            source <<= 1;
            bitnum++;
            compare = source;
        }
    }
    return -1;
}

// Calculates the size of an integer for a particular computer
int int_size(void)
{
    int count = 0;
    unsigned int x = ~0;
    while (x) {
        ++count;
        x >>= 1;
    }
    printf("%i\n", count);
    return count;
}
First, I calculate the size of an integer (can't use sizeof()). Then I align the pattern we are looking for so that it starts from the MSB. I create a temporary variable compare and assign it the value of source, and I also initialize a variable bitnum to 0; it will keep track of the position of the bits we are comparing.
Within the loop I shift compare to the right and then back to the left (zeroing everything except the bits that will be compared against the bit pattern), then I compare the values: if true, the bit number is returned; otherwise, source is shifted once to the left and then assigned to compare (this essentially moves the window of bits we are comparing in compare) and bitnum is incremented. The loop stops executing if pattern wasn't found in source, and -1 is returned, as per the instructions.
However, my program's output turns out to be 14, not 11. I followed the program through with pencil and paper and didn't understand what went wrong... Help?

How does the category bit mask work in SpriteKit? (0xFFFFFFFF)
I have trouble understanding how this bit mask works. I know that it is set to 0xFFFFFFFF by default and that if you want two bodies to contact, you set two different bit masks.
My problem is: how can I set 2 (or more) different bit masks, and how can I change the default value to get a different value? I know that there are 32 different categories. Could you give me some of them?

Convert a uint32_t from little endian to big endian
Hi, I have to create a function in C that receives a uint32_t number in little endian and returns the same number in big endian.

uint32_t byteswap(uint32_t n)

Example: input 0x0a0b0c0d, output 0x0d0c0b0a.

My idea was to convert this number to decimal, divide it by 10 and multiply the remainder of the division by powers of 10, then reconvert to hexadecimal. Is there a better way to do this? Thanks