C++: Given multiple binary strings, produce a binary string whose nth bit is set if the nth bit is the same in all given strings
On input I am given multiple uint32_t numbers, which are in fact binary strings of length 32. I want to produce a binary string (a.k.a. another uint32_t number) whose nth bit is set to 1 if the nth bit is the same in every given input string.
Here is a simple example with 4-bit strings (just a smaller instance of the same problem):
input: 0011, 0101, 0110
output: 1000
because: the first bit is the same in every input string, therefore the first bit of the output is set to 1, and the 2nd, 3rd and 4th bits are set to 0 because they have different values.
What is the best way to produce the output from the given input? I know that I need to use bitwise operators, but I don't know which of them to use and in which order.
uint32_t getResult( const vector< uint32_t > & data ){
//todo
}
2 answers

You want the bits where all the source bits are 1 and the bits where all the source bits are 0. Just AND the source values and the NOT of the source values, then OR the results.
uint32_t getResult( const vector< uint32_t > & data ){
    uint32_t bitsSet = ~0;
    uint32_t bitsClear = ~0;
    for (uint32_t d : data) {
        bitsSet &= d;
        bitsClear &= ~d;
    }
    return bitsSet | bitsClear;
}

First of all you need to loop over the vector, of course.
Then we can use XOR of the current element and the next element. Save the result.
For the next iteration, do the same: XOR of current element with the next element. But then bitwise OR with the saved result of the previous iteration. Save this result. Then continue with this until you have iterated over all (minus one) elements.
The saved result is the complement of what you want.
Taking your example numbers (0011, 0101 and 0110), in the first iteration we have 0011 ^ 0101, which results in 0110. In the next iteration we have 0101 ^ 0110, which results in 0011. Bitwise OR with the previous result (0110 | 0011) gives 0111. End of loop, and the bitwise complement gives the result 1000.
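For completeness, the XOR-of-adjacent-pairs walk described in this answer can be sketched in the question's function signature (a sketch, not the answer author's code):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// XOR of two values has a 1 exactly where they differ; OR-ing the XORs of
// all adjacent pairs marks every bit position that differs anywhere in the
// input. The complement is the requested "same in all strings" mask.
uint32_t getResult(const std::vector<uint32_t>& data) {
    uint32_t differs = 0;
    for (std::size_t i = 0; i + 1 < data.size(); ++i) {
        differs |= data[i] ^ data[i + 1];
    }
    return ~differs;
}
```

With the example input {0b0011, 0b0101, 0b0110} the loop accumulates 0b0111, so the low four bits of the result are 0b1000 (the higher bits are also 1, since those bits are 0 in every input and hence "the same").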
See also questions close to this topic

count number of partitions of a set with n elements into k subsets
This program counts the number of partitions of a set with n elements into k subsets. I am confused here:
return k*countP(n-1, k) + countP(n-1, k-1);
Can someone explain what is happening here? Why are we multiplying by k? NOTE: I know this is not the best way to calculate the number of partitions; that would be DP.
// A C++ program to count number of partitions
// of a set with n elements into k subsets
#include <iostream>
using namespace std;

// Returns count of different partitions of n
// elements in k subsets
int countP(int n, int k)
{
    // Base cases
    if (n == 0 || k == 0 || k > n)
        return 0;
    if (k == 1 || k == n)
        return 1;

    // S(n+1, k) = k*S(n, k) + S(n, k-1)
    return k * countP(n - 1, k) + countP(n - 1, k - 1);
}

// Driver program
int main()
{
    cout << countP(3, 2);
    return 0;
}

How to use DrawTransparentBitmap() in C++ Builder
Just trying to figure out how to use DrawTransparent in C++ Builder to draw a bitmap or TBitmap with an alpha channel, so that the drawn image is semi-transparent against the background image.
I looked all over the place and on this site, but can't find anything other than a note that this, as well as DrawTransparentBitmap, exists.
In the help it is listed as follows:
virtual void __fastcall DrawTransparent(TCanvas* ACanvas, const System::Types::TRect &Rect, System::Byte Opacity);
However there are no code examples. The compiler doesn't recognize the procedure name, and it does not appear as a method of TBitmap...
I am still new to C++ and I could really use some help with this...

Multiple statements in a ranged for loop
I'd like to know if it's possible to convert this expression
vector<Mesh>::iterator vIter;
for(int count = 0, vIter = meshList.begin(); vIter < meshList.end(); vIter++, count++)
{
    ...
}
into something along the lines of C++ 11
I'd like to get something like this:
for(auto count = 0, auto mesh : meshList; ; count++) { ... }
Is there a way to do this?

How to calculate the log2 of integer in C as precisely as possible with bitwise operations
I need to calculate the entropy, and due to the limitations of my system I need to use restricted C features (no loops, no floating-point support), and I need as much precision as possible. From here I figured out how to estimate the floor of log2 of an integer using bitwise operations. Nevertheless, I need to increase the precision of the results. Since no floating-point operations are allowed, is there any way to calculate log2(x/y) with x < y so that the result would be something like log2(x/y)*10000, aiming at getting the precision I need through integer arithmetic?
Javascript bitwise operator with negative number
I have a lot of confusion about these operators: how do they deal with negative numbers, and what is the difference between right shift (>>) and right shift with zero fill (>>>)? Please can anyone explain in detail...

Java Precedence - Casting and Bitwise Operators
I am having a hard time understanding some code that shows an example of how a double in Java can be transformed into a byte[] and vice versa.
Here is the code used to transform a double into a byte[]:
public static byte[] doubleToByteArray(double numDouble) {
    byte[] arrayByte = new byte[8];
    long numLong;
    // Takes the double and sticks it into a long, without changing it
    numLong = Double.doubleToRawLongBits(numDouble);
    // Then we need to isolate each byte
    // The casting of byte, (byte), captures only the 8 rightmost bits
    arrayByte[0] = (byte)(numLong >>> 56);
    arrayByte[1] = (byte)(numLong >>> 48);
    arrayByte[2] = (byte)(numLong >>> 40);
    arrayByte[3] = (byte)(numLong >>> 32);
    arrayByte[4] = (byte)(numLong >>> 24);
    arrayByte[5] = (byte)(numLong >>> 16);
    arrayByte[6] = (byte)(numLong >>> 8);
    arrayByte[7] = (byte)numLong;
    for (int i = 0; i < arrayByte.length; i++) {
        System.out.println("arrayByte[" + i + "] = " + arrayByte[i]);
    }
    return arrayByte;
}
And here is the code used to transform the byte[] back to a double:
public static double byteArrayToDouble(byte[] arrayByte) {
    double numDouble;
    long numLong;
    // When putting a byte into a long, java also adds the sign
    // However, we don't want to put bits that are not from the original value
    //
    // The rightmost bits are left unaltered because we "and" them with a 1
    // The left bits become 0 because we "and" them with a 0
    //
    // We are applying a "mask" (& 0x00 ... FFL)
    // 0 & 0 = 0
    // 0 & 1 = 0
    // 1 & 0 = 0
    // 1 & 1 = 1
    //
    // So, the expression will put the byte in the long (puts it into the rightmost position)
    // Then we apply the mask to remove the sign applied by java
    // Then we move the byte into its position (shift left 56 bits, then 48 bits, etc.)
    // We end up with 8 longs, that each have a byte set up in the appropriate position
    // By doing an | with each one of them, we combine them all into the original long
    //
    // Then we use Double.longBitsToDouble, to convert the long bits into a double.
    numLong = (((long)arrayByte[0] & 0x00000000000000FFL) << 56)
            | (((long)arrayByte[1] & 0x00000000000000FFL) << 48)
            | (((long)arrayByte[2] & 0x00000000000000FFL) << 40)
            | (((long)arrayByte[3] & 0x00000000000000FFL) << 32)
            | (((long)arrayByte[4] & 0x00000000000000FFL) << 24)
            | (((long)arrayByte[5] & 0x00000000000000FFL) << 16)
            | (((long)arrayByte[6] & 0x00000000000000FFL) << 8)
            |  ((long)arrayByte[7] & 0x00000000000000FFL);
    numDouble = Double.longBitsToDouble(numLong);
    return numDouble;
}
Okay, and here is the part I don't quite get.
((long)arrayByte[0] & 0x00000000000000FFL) << 56
It seems as though the casting happens before the actual bitwise operation, because the author says that
the expression will put byte in the long [...] Then we apply mask to remove the sign applied by java
How come the byte is transformed into a long before actually being masked? Shouldn't the operation resemble this?
(((long)arrayByte[0]) & 0x00000000000000FFL) << 56
Or is there something else I don't understand?

Java bitwise causing strange results
I am trying to use an int to represent a register value. I need various parts of the number (in its binary form) to set the state for control lines etc.
My code works fine until I get to the number 4096, at which point my boundaries stop behaving.
My boundaries are defined as follows:
bit 1 to bit 2, bit 3 to bit 6, 7-11, 12-13, 14-n
I use the following code to convert the boundaries bits into integers:
public int getNToKBits(int leftMostBit, int rightMostBit){
    int subBits = ((1 << leftMostBit) - 1) & (value >> (rightMostBit - 1));
    return subBits;
}
but when I try to split the number 4096 into these boundaries I get the following:
b: 00, 10, 10000, 0000, 00
d: 0, 2, 64, 0, 0
I know, there aren't enough bits to make 64!!
what I expect is
b: 00, 10, 00000, 0000, 00
d: 0, 2, 0, 0, 0
It works as expected with numbers less than 4096. Perhaps it's a change in the way Java treats numbers larger than 4096?

Bitwise operation on enum which is not marked by [Flags] attribute
I am comparing a variable to many enum values, but IntelliSense is giving me this warning:
if (val == MyEnum.Value1 | MyEnum.Value2 | MyEnum.Value3){ //code }
My enum looks like this:
public enum MyEnum
{
    [Description("Value1")]
    Value1 = 0,
    [Description("Value2")]
    Value2 = 1,
    [Description("Value3")]
    Value3 = 2,
}
What does it mean, and what should I do? Is it safe? All I want to avoid is having to write the long version of an if multi-value comparison block.

Error. Cannot assign value of type '[UInt32]' to type 'UInt32'
I get an error when I run my code saying: Cannot assign value of type '[UInt32]' to type 'UInt32'. I have added a decent amount of code here, but the problem is within this line: torpedoNode.physicsBody?.contactTestBitMask = alienCategories. I have done some research and tried to solve the problem on my own, but I'm stuck with this error. What am I doing wrong?
let alienCategories:[UInt32] = [(0x1 << 1),(0x1 << 2),(0x1 << 3)]
let alienTextureNames:[String] = ["alien1","alien2","alien3"]
let photonTorpedoCategory:UInt32 = 0x1 << 0
let motionManger = CMMotionManager()
var xAcceleration:CGFloat = 0

override func didMove(to view: SKView) {
    starfield = SKEmitterNode(fileNamed: "Starfield")
    starfield.position = CGPoint(x: 0, y: 1472)
    starfield.advanceSimulationTime(10)
    self.addChild(starfield)
    starfield.zPosition = -1

    player = SKSpriteNode(imageNamed: "shuttle")
    player.position = CGPoint(x: self.frame.size.width / 2, y: player.size.height / 2 + 20)
    self.addChild(player)

    self.physicsWorld.gravity = CGVector(dx: 0, dy: 0)
    self.physicsWorld.contactDelegate = self

    scoreLabel = SKLabelNode(text: "Score: 0")
    scoreLabel.position = CGPoint(x: 100, y: self.frame.size.height - 60)
    scoreLabel.fontName = "AmericanTypewriter-Bold"
    scoreLabel.fontSize = 36
    scoreLabel.fontColor = UIColor.white
    score = 0
    self.addChild(scoreLabel)

    gameTimer = Timer.scheduledTimer(timeInterval: 0.75, target: self, selector: #selector(addAlien), userInfo: nil, repeats: true)

    motionManger.accelerometerUpdateInterval = 0.2
    motionManger.startAccelerometerUpdates(to: OperationQueue.current!) { (data:CMAccelerometerData?, error:Error?) in
        if let accelerometerData = data {
            let acceleration = accelerometerData.acceleration
            self.xAcceleration = CGFloat(acceleration.x) * 0.75 + self.xAcceleration * 0.25
        }
    }
}

func addAlien () {
    //Random index between 0 and 2 changing the texture and category bitmask
    let index = Int.random(in: 0...2)
    let textureName = alienTextureNames[index]
    let alien = SKSpriteNode(imageNamed: textureName)
    //Use Int.random() if you don't want a CGFloat
    let xPos = CGFloat.random(in: 0...414)
    let yPos = frame.size.height + alien.size.height
    alien.position = CGPoint(x: xPos, y: yPos)
    alien.physicsBody = SKPhysicsBody(rectangleOf: alien.size)
    alien.physicsBody?.isDynamic = true
    alien.physicsBody?.categoryBitMask = alienCategories[index]
    alien.physicsBody?.contactTestBitMask = photonTorpedoCategory
    alien.physicsBody?.collisionBitMask = 0
    self.addChild(alien)
    let moveAction = SKAction.moveTo(y: -alien.size.height, duration: 6)
    alien.run(moveAction, completion: { alien.removeFromParent() })
}

override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    fireTorpedo()
}

func fireTorpedo() {
    self.run(SKAction.playSoundFileNamed("torpedo.mp3", waitForCompletion: false))
    let torpedoNode = SKSpriteNode(imageNamed: "torpedo")
    torpedoNode.position = player.position
    torpedoNode.position.y += 5
    torpedoNode.physicsBody = SKPhysicsBody(circleOfRadius: torpedoNode.size.width / 2)
    torpedoNode.physicsBody?.isDynamic = true
    torpedoNode.physicsBody?.categoryBitMask = photonTorpedoCategory
    torpedoNode.physicsBody?.contactTestBitMask = alienCategories
    torpedoNode.physicsBody?.collisionBitMask = 0
    torpedoNode.physicsBody?.usesPreciseCollisionDetection = true
    self.addChild(torpedoNode)
    let animationDuration:TimeInterval = 0.3
    var actionArray = [SKAction]()
    actionArray.append(SKAction.move(to: CGPoint(x: player.position.x, y: self.frame.size.height + 10), duration: animationDuration))
    actionArray.append(SKAction.removeFromParent())
    torpedoNode.run(SKAction.sequence(actionArray))
}

func didBegin(_ contact: SKPhysicsContact) {
    var firstBody:SKPhysicsBody
    var secondBody:SKPhysicsBody
    if contact.bodyA.categoryBitMask < contact.bodyB.categoryBitMask {
        firstBody = contact.bodyA
        secondBody = contact.bodyB
    }else{
        firstBody = contact.bodyB
        secondBody = contact.bodyA
    }
    if firstBody.categoryBitMask == photonTorpedoCategory && secondBody.categoryBitMask == alienCategories[0] {
        torpedoDidCollideWithAlien(torpedoNode: firstBody.node as! SKSpriteNode, alienNode: secondBody.node as! SKSpriteNode)
        score += 5
    }else if firstBody.categoryBitMask == photonTorpedoCategory && secondBody.categoryBitMask == alienCategories[1] {
        torpedoDidCollideWithAlien(torpedoNode: firstBody.node as! SKSpriteNode, alienNode: secondBody.node as! SKSpriteNode)
        score -= 5
    }else if firstBody.categoryBitMask == photonTorpedoCategory && secondBody.categoryBitMask == alienCategories[2] {
        torpedoDidCollideWithAlien(torpedoNode: firstBody.node as! SKSpriteNode, alienNode: secondBody.node as! SKSpriteNode)
        score -= 10
    }
}

func torpedoDidCollideWithAlien (torpedoNode:SKSpriteNode, alienNode:SKSpriteNode) {
    let explosion = SKEmitterNode(fileNamed: "Explosion")!
    explosion.position = alienNode.position
    self.addChild(explosion)
    self.run(SKAction.playSoundFileNamed("explosion.mp3", waitForCompletion: false))
    torpedoNode.removeFromParent()
    alienNode.removeFromParent()
    self.run(SKAction.wait(forDuration: 2)) {
        explosion.removeFromParent()
    }
    score += 5
}

uint32_t does not name a type - VSCode with STM32 in Windows
I am currently writing code for a project, specifically interfacing with sensors via an STM32 Nucleo F411RE board. I set the pins/peripherals etc. using STM32CubeMX, then generated the code with the Makefile toolchain for programming in Visual Studio Code. Everything is compiling fine, but the IDE/IntelliSense for some reason doesn't pick up any use of uint32_t; any occurrences are red-squiggled, with the error reading: variable "uint32_t" is not a type name. I have #include <stdint.h> at the top, and uint16_t and uint8_t are both recognized in the same file. Peeking the definition of those reveals their lines in stdint.h, while the same does not work for uint32_t. I have attempted the solutions suggested here and here, neither of which worked. I am working in C11 (not C++, which isn't an option for my project) on Windows. Here is my c_cpp_properties.json file (with all the extra defines cut out for compactness):
{
    "configurations": [
        {
            "name": "Win32",
            "includePath": [
                "${workspaceFolder}/**"
            ],
            "defines": [
                "_DEBUG",
                "UNICODE",
                "_UNICODE"
            ],
            "databaseFilename": "${workspaceFolder}/.vscode/browse.vc.db",
            "windowsSdkVersion": "10.0.17763.0",
            "compilerPath": "C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.16.27023/bin/Hostx64/x64/cl.exe",
            "cStandard": "c11",
            "cppStandard": "c++17",
            "intelliSenseMode": "clang-x64",
            "compileCommands": "${workspaceFolder}/compile_commands.json"
        }
    ],
    "version": 4
}
Here is <stdint.h> - all types are clearly declared together, with no apparent condition for uint32_t:
#pragma once
#define _STDINT

#ifndef RC_INVOKED
#include <vcruntime.h>

typedef signed char        int8_t;
typedef short              int16_t;
typedef int                int32_t;
typedef long long          int64_t;
typedef unsigned char      uint8_t;
typedef unsigned short     uint16_t;
typedef unsigned int       uint32_t;
typedef unsigned long long uint64_t;

typedef signed char        int_least8_t;
typedef short              int_least16_t;
typedef int                int_least32_t;
typedef long long          int_least64_t;
typedef unsigned char      uint_least8_t;
typedef unsigned short     uint_least16_t;
typedef unsigned int       uint_least32_t;
typedef unsigned long long uint_least64_t;

typedef signed char        int_fast8_t;
typedef int                int_fast16_t;
typedef int                int_fast32_t;
typedef long long          int_fast64_t;
typedef unsigned char      uint_fast8_t;
typedef unsigned int       uint_fast16_t;
typedef unsigned int       uint_fast32_t;
typedef unsigned long long uint_fast64_t;

typedef long long          intmax_t;
typedef unsigned long long uintmax_t;

// These macros must exactly match those in the Windows SDK's intsafe.h.
#define INT8_MIN         (-127i8 - 1)
#define INT16_MIN        (-32767i16 - 1)
#define INT32_MIN        (-2147483647i32 - 1)
#define INT64_MIN        (-9223372036854775807i64 - 1)
#define INT8_MAX         127i8
#define INT16_MAX        32767i16
#define INT32_MAX        2147483647i32
#define INT64_MAX        9223372036854775807i64
#define UINT8_MAX        0xffui8
#define UINT16_MAX       0xffffui16
#define UINT32_MAX       0xffffffffui32
#define UINT64_MAX       0xffffffffffffffffui64

#define INT_LEAST8_MIN   INT8_MIN
#define INT_LEAST16_MIN  INT16_MIN
#define INT_LEAST32_MIN  INT32_MIN
#define INT_LEAST64_MIN  INT64_MIN
#define INT_LEAST8_MAX   INT8_MAX
#define INT_LEAST16_MAX  INT16_MAX
#define INT_LEAST32_MAX  INT32_MAX
#define INT_LEAST64_MAX  INT64_MAX
#define UINT_LEAST8_MAX  UINT8_MAX
#define UINT_LEAST16_MAX UINT16_MAX
#define UINT_LEAST32_MAX UINT32_MAX
#define UINT_LEAST64_MAX UINT64_MAX

#define INT_FAST8_MIN    INT8_MIN
#define INT_FAST16_MIN   INT32_MIN
#define INT_FAST32_MIN   INT32_MIN
#define INT_FAST64_MIN   INT64_MIN
#define INT_FAST8_MAX    INT8_MAX
#define INT_FAST16_MAX   INT32_MAX
#define INT_FAST32_MAX   INT32_MAX
#define INT_FAST64_MAX   INT64_MAX
#define UINT_FAST8_MAX   UINT8_MAX
#define UINT_FAST16_MAX  UINT32_MAX
#define UINT_FAST32_MAX  UINT32_MAX
#define UINT_FAST64_MAX  UINT64_MAX

#ifdef _WIN64
    #define INTPTR_MIN   INT64_MIN
    #define INTPTR_MAX   INT64_MAX
    #define UINTPTR_MAX  UINT64_MAX
#else
    #define INTPTR_MIN   INT32_MIN
    #define INTPTR_MAX   INT32_MAX
    #define UINTPTR_MAX  UINT32_MAX
#endif

#define INTMAX_MIN       INT64_MIN
#define INTMAX_MAX       INT64_MAX
#define UINTMAX_MAX      UINT64_MAX

#define PTRDIFF_MIN      INTPTR_MIN
#define PTRDIFF_MAX      INTPTR_MAX

#ifndef SIZE_MAX
    #define SIZE_MAX     UINTPTR_MAX
#endif

#define SIG_ATOMIC_MIN   INT32_MIN
#define SIG_ATOMIC_MAX   INT32_MAX

#define WCHAR_MIN        0x0000
#define WCHAR_MAX        0xffff

#define WINT_MIN         0x0000
#define WINT_MAX         0xffff

#define INT8_C(x)    (x)
#define INT16_C(x)   (x)
#define INT32_C(x)   (x)
#define INT64_C(x)   (x ## LL)
#define UINT8_C(x)   (x)
#define UINT16_C(x)  (x)
#define UINT32_C(x)  (x ## U)
#define UINT64_C(x)  (x ## ULL)
#define INTMAX_C(x)  INT64_C(x)
#define UINTMAX_C(x) UINT64_C(x)

#endif // RC_INVOKED

/*
 * Copyright (c) 1992-2012 by P.J. Plauger. ALL RIGHTS RESERVED.
 * Consult your license regarding permissions and restrictions.
V6.00:0009 */
Both uint_fast16_t and uint_fast32_t have no problems when I use them, which rules out the type being an issue: they are both unsigned int, the same as uint32_t.
Why does `static_cast<int>(uint32_t)` work unexpectedly?
The following code does what's expected:
uint32_t a = 1, b = 2;
std::cout << static_cast<int64_t>(a) - b << '\n';
Prints -1. But if I change int64_t to int everything breaks:
std::cout << static_cast<int>(a) - b << '\n';
Prints 4294967295. Does anybody know what's the trick?