Fill values X from values Y
I'm trying to create an algorithm that fills a set of target values X (e.g. 10,000; 50,000; 100,000) using another set of values Y (e.g. 2,500; 5,000; 10,000; 42,500; 27,500).
Aim: maximize the total amount of X that gets filled.
Rules: no value from Y may be used more than once, and a value X only counts if it is filled exactly.
I've tried treating this as a knapsack problem, but it doesn't scale well because it creates huge value arrays. Any ideas?
Edit for more clarity:
There is an array of values X (ValueX) and an array of values Y (ValueY). Fill each individual value in ValueX using any combination of values from ValueY. Once a value from ValueY has been used, it cannot be reused.
Example:
Fill ValueX[0] (10,000)
You could use ValueY[2] (10,000), and that would fill it completely. However, ValueY[2] can now no longer be used to fill any future ValueX.
If you then tried to fill ValueX[1] (50,000), you could use ValueY[3] (42,500), ValueY[1] (5,000), and ValueY[0] (2,500) to get a total of 50,000. Now those values (3, 1, 0) from ValueY have also been used.
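A sketch of one workable approach (a greedy heuristic, not guaranteed optimal — all function names here are mine): try to fill the largest targets first, using an exact subset-sum search over the remaining Y values. For very large Y arrays you would need to bound the reachable-sums table.

```python
def subset_with_sum(values, target):
    # Map each reachable sum <= target to one set of indices achieving it.
    reachable = {0: []}
    for i, v in enumerate(values):
        for s, idxs in list(reachable.items()):
            ns = s + v
            if ns <= target and ns not in reachable:
                reachable[ns] = idxs + [i]
    return reachable.get(target)  # None if target is not exactly reachable

def fill_targets(targets, pool):
    pool = list(pool)
    plan = {}
    for x in sorted(targets, reverse=True):  # largest targets first
        pick = subset_with_sum(pool, x)
        if pick is not None:
            plan[x] = [pool[i] for i in pick]
            for i in sorted(pick, reverse=True):  # consume the used Y values
                pool.pop(i)
    return plan
```

On the example above, this fills 50,000 with {42,500; 5,000; 2,500} and 10,000 with {10,000}, for a filled total of 60,000; 100,000 cannot be filled because the whole pool sums to only 87,500.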
See also questions close to this topic

Incorrect output for this code
I am brushing up on some programming this summer and was working on the following question: Write a method collapse that takes a stack of integers as a parameter and collapses it by replacing each successive pair of integers with the sum of the pair.
Basically my method is returning the stack in reversed order, except in the case of an odd number of elements, where I expect it to at least return the sums, regardless of order. I am only supposed to use one queue. My question is: where am I going wrong here, and why can't I sum my two pops?

public static Stack collapse(Stack sO) {
    Queue<Integer> qN = new LinkedList<Integer>();
    int x1 = 0;
    int x2 = 0;
    while (!sO.empty()) {
        qN.add(sO.pop());
    }
    while (!qN.isEmpty()) {
        int sum = 0;
        x1 = qN.remove();
        if (sO.empty()) { // always true here: sO was emptied by the first loop
            qN.add(x1);
            break;
        } else {
            x2 = qN.remove();
            sum = x1 + x2;
            qN.add(sum);
        }
    }
    while (!qN.isEmpty()) {
        sO.push(qN.remove());
    }
    return sO;
}
bottom [7, 2, 8, 9, 4, 13, 7, 1, 9, 10] top

The first pair should be collapsed into 9 (7 + 2), the second pair into 17 (8 + 9), the third pair into 17 (4 + 13), and so on, to yield:
bottom [9, 17, 17, 8, 19] top
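For reference, the pairing itself can be done with a single queue by summing elements in pairs as they leave it. A minimal Python sketch of the intended behavior (using a deque as the queue; this illustrates the algorithm, not the Java fix itself):

```python
from collections import deque

def collapse(stack):
    # stack is given bottom-to-top; pairs are summed from the bottom up
    q = deque(stack)
    collapsed = []
    while len(q) >= 2:
        collapsed.append(q.popleft() + q.popleft())
    collapsed.extend(q)  # a leftover element (odd length) is kept as-is
    return collapsed
```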

JUnit 4.xx (Java) Trouble with Mockito when Trying to Get Line Coverage
The following is my main code.
public class KafkaConsumerForTests {
    private ConsumerRecords<String, String> records;
    private GenericKafkaConsumer<String, String> consumer;

    @Override
    public void run() {
        try {
            while (true) {
                LOGGER.logp(Level.INFO, CLASS_NAME, "run()", "Attempting to Poll");
                records = consumer.poll(10000);
                int numOfRecords = records.count();
                if (numOfRecords == 0) { // I want to get line coverage for this branch
                    LOGGER.logp(Level.INFO, CLASS_NAME, "run()", "No Response. Invalid Topic");
                    break;
                } else if (numOfRecords > 0) { // I want to get line coverage for this branch
                    LOGGER.logp(Level.INFO, CLASS_NAME, "run()", "Response Received");
                }
            }
        } catch (WakeupException e) {
            consumer.close();
        }
    }
}
As you can see, I want to get line coverage for those branches to test that the logging is correct. I tried mocking out records.count(); you can see my code for the test case below.
@Test
public void testRunWithZeroRecords() throws IOException {
    KafkaConsumerForTests consumerThread3 = spy(new KafkaConsumerForTests("topic_pleasestuff", "lmao"));
    ConsumerRecords<String, String> mock2 = mock(ConsumerRecords.class);
    consumerThread3.records = mock2;
    when(mock2.count()).thenReturn(9);
    consumerThread3.run();
    // verify(mock2, times(1)).count();
}
No matter what I do, I'm not hitting:
else if(numOfRecords > 0)
I am returning a number greater than 0, but it's as if records.count() in the mock is never executed. I apologize for any convention or Stack Overflow formatting errors; I'm new to the Java community.

Text size Epson TMH5000II
I'm trying to reduce the size of the text. According to the manual I need to use ESC ! 1 (https://reference.epsonbiz.com/modules/ref_escpos/index.php?content_id=23), but I don't know how to pass it in Java code. I tried defining a byte array using decimal, hex, and ASCII values, but it doesn't work.
public class JavaPrinter implements PrinterApi {
    private Logger logger = LogManager.getLogger(JavaPrinter.class);
    PrintService printService;
    boolean isFile = false;
    String printerName = null;
    PrintStream prnStr;
    private PipedInputStream pipe;
    PipedOutputStream dataOutput;
    Doc mydoc;
    byte[] widthNormal = { 0x1b, '!', '1' }; // note: '1' is the ASCII byte 0x31, not the value 0x01

    @Override
    public void setNormal() {
        if (isFile)
            return;
        try {
            prnStr.write(widthNormal);
        } catch (IOException e) {
            throw new DeviceServerRuntimeException("", e);
        }
    }
}
Above is part of the code I wrote. I'd appreciate any advice or help, thanks!
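Per the linked ESC/POS reference, ESC ! n takes its mode argument as a raw numeric byte, not an ASCII digit, so sending the character '1' (0x31) sets different mode bits than sending the byte 0x01. A minimal Python sketch of building the command bytes (the helper name is mine):

```python
# Build the ESC ! n print-mode command for an ESC/POS printer.
# n is a bit field; bit 0 selects Font B (the smaller font).
ESC = b"\x1b"

def select_print_mode(n):
    # bytes([1]) is the byte 0x01, whereas b"1" would be ASCII 0x31
    return ESC + b"!" + bytes([n])

command = select_print_mode(1)  # font B (smaller text)
```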

How can everything possibly be transformed into a unique `md5`?
There is effectively unlimited data in the world, but the md5 algorithm has a limited output range (16^32 possible digests). So how can everything be transformed into a unique md5?
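It can't, by the pigeonhole principle: MD5 always produces a 128-bit digest (32 hex characters), so once there are more than 16^32 distinct inputs, collisions are unavoidable. A quick Python check of the fixed output size:

```python
import hashlib

# MD5 maps inputs of any length to a fixed 128-bit digest
for message in [b"", b"a", b"a" * 10_000]:
    digest = hashlib.md5(message).hexdigest()
    assert len(digest) == 32  # always 32 hex chars = 128 bits

print(hashlib.md5(b"hello").hexdigest())
```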

How to best build a persistent binary tree from a sorted stream
For a side project I wanted a simple way to generate a persistent binary search tree from a sorted stream. After some cursory searching I was only able to find descriptions of techniques that start from a sorted array where you can access any element by index. I ended up writing something that works, but I figured this is well-trodden territory and a canonical example is probably documented somewhere (and probably has a name).
The makeshift code I wrote is included just for clarity (it's also short):
import scala.annotation.tailrec

object TreeFromStream {
  sealed trait ImmutableTree[T] { def height: Int }

  case class ImmutableTreeNode[T](
      value: T,
      left: ImmutableTree[T],
      right: ImmutableTree[T]
  ) extends ImmutableTree[T] {
    lazy val height = left.height + 1
  }

  case class NilTree[T]() extends ImmutableTree[T] {
    def height = 0
  }

  @tailrec
  def treeFromStream[T](
      stream: Stream[T],
      tree: ImmutableTree[T] = NilTree[T](),
      ancestors: List[ImmutableTreeNode[T]] = Nil
  ): ImmutableTree[T] = {
    (stream, ancestors) match {
      case (Stream.Empty, _) =>
        ancestors.foldLeft(tree) { case (right, root) => root.copy(right = right) }
      case (_, ancestor :: nextAncestors) if ancestor.left.height == tree.height =>
        treeFromStream(stream, ancestor.copy(right = tree), nextAncestors)
      case (next #:: rest, _) =>
        treeFromStream(rest, NilTree(), ImmutableTreeNode(next, tree, NilTree()) :: ancestors)
    }
  }
}
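One canonical indexing-free technique: if the element count is known, a balanced BST can be built directly from a sorted iterator in O(n) by recursively consuming the left subtree, then the root, then the right subtree. A Python sketch of that idea (all names here are mine, not from the Scala code above):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def tree_from_sorted(it, n):
    # Build a height-balanced BST from the next n items of a sorted iterator.
    if n == 0:
        return None
    left = tree_from_sorted(it, n // 2)                 # consume the left half first
    root = Node(next(it), left=left)                    # then the root
    root.right = tree_from_sorted(it, n - n // 2 - 1)   # then the right half
    return root

def inorder(node):
    return [] if node is None else inorder(node.left) + [node.value] + inorder(node.right)
```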

Count combinations between columns dynamically in r
I've got this dataframe (d)
   V1 V2 V3
1   A  A  A
2   A  A  C
3   A  B  A
4   A  B  A
5   A  A  A
6   A  A  C
7   A  B  A
8   A  B  A
9   A  A  A
10  B  A  C
11  B  B  A
12  B  B  A
13  B  A  A
14  C  A  C
15  A  B  A
And I've got this code, which calculates, for every pair of columns, the occurrences of each combination of values given the value in the remaining column. I.e.: how many times do I get A in V1 and A in V2 when V3 is A?
library("gtools")
library("tidyverse")

all_combs <- expand.grid(unique(unlist(d)), unique(unlist(d)), unique(unlist(d))) %>%
  rowwise() %>%
  mutate_all(as.character) %>%
  mutate(two = paste(sort(c(Var1, Var2)), collapse = "")) %>%
  ungroup() %>%
  unite(all, two, Var3) %>%
  select(all) %>%
  distinct()

combn(1:ncol(d), 2, simplify = F) %>%
  set_names(map(., ~paste(., collapse = "&"))) %>%
  map(~select(d, a = .[1], b = .[2], everything()) %>%
        rowwise() %>%
        mutate_all(as.character) %>%
        mutate(two = paste(sort(c(a, b)), collapse = "")) %>%
        select(two, contains("V"), a, b) %>%
        ungroup() %>%
        unite(all, two, contains("V")) %>%
        count(all)) %>%
  map(~right_join(., all_combs, by = "all")) %>%
  bind_rows(.id = "id") %>%
  mutate(n = ifelse(is.na(n), 0, n)) %>%
  spread(id, n)
If d is a data frame with 3 columns it works fine, and it returns the frequencies of the combinations of the values in d:
# A tibble: 18 x 4
   all   `1&2` `1&3` `2&3`
   <chr> <dbl> <dbl> <dbl>
 1 AA_A      3     3     3
 2 AA_B      0     5     1
 3 AA_C      2     0     0
 4 AB_A      6     1     5
 5 AB_B      0     2     2
 6 AB_C      1     0     0
 7 AC_A      0     2     2
 8 AC_B      0     0     1
 9 AC_C      1     0     1
10 BB_A      2     0     0
11 BB_B      0     0     0
12 BB_C      0     0     0
13 BC_A      0     1     0
14 BC_B      0     0     0
15 BC_C      0     0     0
16 CC_A      0     1     0
17 CC_B      0     0     0
18 CC_C      0     0     0
but if I increase the number of columns it doesn't work properly. If I add a column to my dataframe like this:
   V1 V2 V3 V4
1   A  A  A  A
2   A  A  C  B
3   A  B  A  C
4   A  B  A  A
5   A  A  A  B
6   A  A  C  A
7   A  B  A  C
8   A  B  A  A
9   A  A  A  A
10  B  A  C  A
11  B  B  A  A
12  B  B  A  A
13  B  A  A  B
14  C  A  C  B
15  A  B  A  B
it computes all the combinations correctly, but it doesn't count the occurrences:
# A tibble: 18 x 7
   all   `1&2` `1&3` `1&4` `2&3` `2&4` `3&4`
   <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
 1 AA_A      0     0     0     0     0     0
 2 AA_B      0     0     0     0     0     0
 3 AA_C      0     0     0     0     0     0
 4 AB_A      0     0     0     0     0     0
 5 AB_B      0     0     0     0     0     0
 6 AB_C      0     0     0     0     0     0
 7 AC_A      0     0     0     0     0     0
 8 AC_B      0     0     0     0     0     0
 9 AC_C      0     0     0     0     0     0
10 BB_A      0     0     0     0     0     0
11 BB_B      0     0     0     0     0     0
12 BB_C      0     0     0     0     0     0
13 BC_A      0     0     0     0     0     0
14 BC_B      0     0     0     0     0     0
15 BC_C      0     0     0     0     0     0
16 CC_A      0     0     0     0     0     0
17 CC_B      0     0     0     0     0     0
18 CC_C      0     0     0     0     0     0
Is there a way to achieve this dynamically?
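For comparison, here is the counting logic of the 3-column case in plain Python (not a fix for the dplyr pipeline, just a cross-check of the expected numbers): for every pair of columns, count the sorted value pair joined with the value of the remaining column.

```python
from collections import Counter
from itertools import combinations

# the example data frame d (3 columns), row by row
d = [
    ("A", "A", "A"), ("A", "A", "C"), ("A", "B", "A"), ("A", "B", "A"),
    ("A", "A", "A"), ("A", "A", "C"), ("A", "B", "A"), ("A", "B", "A"),
    ("A", "A", "A"), ("B", "A", "C"), ("B", "B", "A"), ("B", "B", "A"),
    ("B", "A", "A"), ("C", "A", "C"), ("A", "B", "A"),
]

counts = {}
for i, j in combinations(range(3), 2):
    k = ({0, 1, 2} - {i, j}).pop()  # the remaining column
    key = f"{i + 1}&{j + 1}"
    counts[key] = Counter(
        "".join(sorted((row[i], row[j]))) + "_" + row[k] for row in d
    )
```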

Find K arrays that sum up to a given array with a certain accuracy
Let's say I have a set containing thousands of arrays (let's fix it at 5,000 arrays) of a fixed size (size = 8) with non-negative values. I'm given another array of the same size with non-negative values (the input array). My task is to select a subset of the arrays such that their element-wise sum is very close to the input array, within a desired accuracy (+m).
For example, if the input array is (3, 2, 5) and accuracy = 2, then the best subset would of course be one that sums to exactly (3, 2, 5), but any solution of the form (3 + m, 2 + m, 5 + m) would also be acceptable.
The question is: what would be the right algorithmic approach here? It is similar to the multidimensional knapsack problem, but there is no cost to optimize in my task.
At least one solution meeting the constraints is required, but several would be better, so that there is a choice.
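For small instances the condition can be checked directly by brute force over subsets; the structure of the check (element-wise sums within [target, target + m]) is the part that carries over to a scalable approach such as randomized greedy search or integer programming. A sketch (the function name is mine):

```python
from itertools import combinations

def find_subset(arrays, target, m):
    # Brute force for small instances; for thousands of arrays you would
    # switch to a randomized/greedy heuristic or an ILP solver instead.
    for r in range(1, len(arrays) + 1):
        for combo in combinations(range(len(arrays)), r):
            sums = [sum(arrays[i][k] for i in combo) for k in range(len(target))]
            if all(t <= s <= t + m for s, t in zip(sums, target)):
                return list(combo)  # indices of a subset within tolerance
    return None
```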

Matching quantities of objects in a list (Python)
I've been trying to find a Pythonic way to match a list of objects based on their quantity attributes. So far I have a terrible nested if/elif solution and it looks awful.
Assume you have a simple object that looks like this:
class Obj:
    def __init__(self, name, quantity):
        self.name = name
        self.quantity = quantity

    def __repr__(self):
        return 'Obj({}, {})'.format(self.name, self.quantity)
and now you have a list containing those objects, where the order is important, such as:
container = [Obj('first', 20), Obj('second', 25), Obj('third', 20)]
The goal is to match the first object with the first opposite-quantity object, create a new object with the residual quantity, and then match that one recursively, to get a final list that looks like this:
[[Obj('first', 20), Obj('second', 20)], [Obj('second', 5), Obj('third', 5)], [Obj('third', 15)]]
another example would be:
container = [Obj('first', 20), Obj('second', 15), Obj('third', 40)]
result:
[[Obj('first', 20), Obj('third', 20)], [Obj('second', 15), Obj('third', 15)], Obj('third', 5)]
My nested loop with if/elif combinations is not even worth showing. I would appreciate any help, thanks!
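One interpretation, sketched in Python (it reproduces the first example; the exact rule for which objects count as "opposite" is not fully specified by the second example, so treat this as a starting point only): repeatedly match the head of the list against the next object, emit the matched pair, and carry the residual forward.

```python
class Obj:
    def __init__(self, name, quantity):
        self.name = name
        self.quantity = quantity

    def __repr__(self):
        return 'Obj({}, {})'.format(self.name, self.quantity)

def match_quantities(container):
    queue = list(container)
    result = []
    while len(queue) >= 2:
        a, b = queue[0], queue[1]
        matched = min(a.quantity, b.quantity)
        result.append([Obj(a.name, matched), Obj(b.name, matched)])
        rest = queue[2:]
        if a.quantity > matched:        # carry a's residual forward
            rest.insert(0, Obj(a.name, a.quantity - matched))
        elif b.quantity > matched:      # carry b's residual forward
            rest.insert(0, Obj(b.name, b.quantity - matched))
        queue = rest
    result.extend([o] for o in queue)   # unmatched remainder
    return result
```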

Academic exercise (C++): how to print a char pointer in the main function
I am trying to print a char pointer holding a string I created using strcpy. The problem is that when I try to print the result in the main function I see symbols instead of the characters; using the debugger I saw that the function returns the right data to main. Hope you can help.
Here is the code:
#include <iostream>
#include <stdio.h>
#include <string.h>
#define _CRT_SECURE_NO_WARNINGS
using namespace std;

char *concatChar(char *str_array[], int size, int index);

#define N 3
#define SIZE 4

void main() {
    char *arr[N];
    int index;
    char *result;

    // user input for the strings:
    cout << "Please enter strings (up to 3 strings): " << endl;
    for (int i = 0; i < N; i++) {
        arr[i] = new char[N];
        cin.getline(arr[i], SIZE, '\n');
    }
    cout << arr[0] << arr[1] << arr[2];

    // user input for an index
    cout << "Please enter a digit for the index: " << endl;
    cin >> index;

    // call the function and print the result
    result = concatChar(arr, N, index);
    cout << result;

    // release the memory
    for (int i = 0; i < N; i++) {
        arr[i] = NULL;
        delete[] arr[i];
    }
    system("pause");
}

char *concatChar(char *str_array[], int size, int index) {
#define SIZE2 4
    char arr_fin[SIZE2];
    char *tmp = arr_fin;
    int min = 100;

    // check the length of the strings
    for (int i = 0; i < size; i++) {
        if (min > strlen(str_array[i])) {
            min = strlen(str_array[i]);
        }
    }
    min += 1;

    for (int x = 0; x < size; x++) {
        char *tmp2 = str_array[x];
        tmp2 += index;
        strcpy(&(tmp[x]), tmp2);
        tmp[x + 1] = '\0';
    }
    return tmp;  // note: this returns a pointer to the local array arr_fin
}

Dynamic Programming Primitive Calculator
The full explanation of the problem is here: http://imgur.com/a/UiE7L. I've written the code, but it gives a segmentation fault which I'm not able to fix. The logic of the program is to save the minimum number of operations needed to reach the number n in the nth position of the array.
#include <iostream>
#include <vector>
#include <algorithm>
#include <stdio.h>
#include <stdlib.h>

long long f(long long n, std::vector<long long> arr) {
    arr[1] = 0;  // note: arr may still be empty here, so this indexes out of bounds
    arr.push_back(n);
    long long ans = 0, ret = 0;
    if (n == 1) {
        return (0);
    }
    ans = f(n - 1, arr) + 1;
    if (n % 2 == 0) {
        ret = f(n / 2, arr) + 1;
        if (ret < ans) {
            ans = ret;
            std::cout << ans << '\n';
        }
    }
    if (n % 3 == 0) {
        ret = f(n / 3, arr) + 1;
        if (ret < ans) {
            ans = ret;
            std::cout << ans << '\n';
        }
    }
    arr[n] = ans;
    return arr[n];
}

int main() {
    long long n;
    std::cin >> n;
    std::vector<long long> arr;
    std::cout << f(n, arr);
    return 0;
}
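A bottom-up version avoids the recursion entirely (and with it the segfault risk), and matches the stated logic of storing the answer for k at position k. This assumes the allowed operations are +1, x2, and x3, as in the linked primitive-calculator problem; a Python sketch:

```python
def min_ops(n):
    # dp[k] = minimum number of operations (+1, *2, *3) to reach k from 1
    dp = [0] * (n + 1)
    for k in range(2, n + 1):
        best = dp[k - 1] + 1           # reach k by adding 1
        if k % 2 == 0:
            best = min(best, dp[k // 2] + 1)  # or by doubling
        if k % 3 == 0:
            best = min(best, dp[k // 3] + 1)  # or by tripling
        dp[k] = best
    return dp[n]
```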

How to find the different sets of d ‘edits’
I am having some trouble understanding a question.
The question goes like this:
Let d be the Levenshtein distance between STRONGEST and TRAINERS (d = 6).
(a). How many different sets of d ‘edits’ (insertions, deletions, or substitutions) are there that will change the string STRONGEST into the string TRAINERS?
So I was given a solution like this:
The total number of sets of edits seems to be 8, but I'm not quite sure how to derive that from the distance matrix. Can anyone clarify this for me? Why is there a circled number in that particular position?
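For reference, the distance matrix comes from the standard Wagner-Fischer dynamic program: each cell D[i][j] is the cheapest of a deletion, an insertion, or a (possibly free) substitution. Counting the distinct sets of d edits corresponds to counting the distinct minimal paths through this matrix from the bottom-right corner back to the top-left. A Python sketch of the matrix itself:

```python
def levenshtein(s, t):
    # D[i][j] = edit distance between s[:i] and t[:j]
    D = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i in range(len(s) + 1):
        D[i][0] = i                                # delete all of s[:i]
    for j in range(len(t) + 1):
        D[0][j] = j                                # insert all of t[:j]
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            D[i][j] = min(D[i - 1][j] + 1,         # deletion
                          D[i][j - 1] + 1,         # insertion
                          D[i - 1][j - 1] + cost)  # substitution / match
    return D[len(s)][len(t)]
```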

Javascript smart filtering array of objects
I have an array of objects with key/value pairs like below.
players = [
  { id: 1, name: "player1", value: 5.6, position: "Goalkeeper" },
  { id: 1, name: "player1", value: 7.7, position: "Defender" },
  { id: 1, name: "player2", value: 6.1, position: "Midfielder" },
  { id: 1, name: "player1", value: 7.2, position: "Forward" },
  // ...n
]
What I want to achieve is to auto-select 15 players (2 goalkeepers, 5 defenders, 5 midfielders, and 3 forwards) from an array of 700 players so that their total value is close or equal to 100. Any help would be appreciated :)
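One simple baseline, sketched in Python rather than JavaScript (a hedged sketch, not a complete solution): randomly sample many squads that satisfy the position quotas and keep the one whose total value is closest to the budget. Field names follow the question; everything else is an assumption.

```python
import random

QUOTAS = {"Goalkeeper": 2, "Defender": 5, "Midfielder": 5, "Forward": 3}

def pick_squad(players, budget=100.0, tries=2000, rng=random):
    # group the pool by position once up front
    by_pos = {pos: [p for p in players if p["position"] == pos] for pos in QUOTAS}
    best, best_gap = None, float("inf")
    for _ in range(tries):
        squad = []
        for pos, k in QUOTAS.items():
            squad += rng.sample(by_pos[pos], k)  # k random players per position
        gap = abs(sum(p["value"] for p in squad) - budget)
        if gap < best_gap:
            best, best_gap = squad, gap
    return best
```

With 700 players this finds a near-budget squad quickly; for a tighter guarantee you would move to a knapsack-style dynamic program or an integer-programming solver.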

Split a list of numbers into chunks with almost equal chunksums
Python program to split a list of numbers into n chunks such that the chunks have (close to) equal sums. The order doesn't matter.
Example 1: list = [1,3,2,4], chunks = 2 should return [1,4] and [3,2] (the sum is 5 in both chunks)
Example 2: list = [5,7,8], chunks = 2 should return [5,7] and [8] (sums 12 and 8, the least possible difference between chunks)
Example 3: list = [2,2,3,3,4,6], chunks = 4 should return [2,3], [2,3], [4] and [6] (sums 5, 5, 4, 6)
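A common heuristic for this (a sketch, not guaranteed optimal in general): sort the numbers in descending order and always drop the next number into the chunk with the smallest current sum. It reproduces all three examples up to ordering.

```python
def split_chunks(nums, n):
    chunks = [[] for _ in range(n)]
    for x in sorted(nums, reverse=True):
        min(chunks, key=sum).append(x)  # greedy: smallest-sum chunk gets x
    return chunks
```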

Knapsack Variationish
I was given this question, and this is my first try at figuring out dynamic programming.
I am given the solution F(x) = 1 + F(x - a[i]);
The question is:
Given an array of numbers (a), find a combination of those numbers that adds up to a given sum. That is, say whether it is possible to reach that sum, and what is the minimum number of numbers used to reach it. E.g., with the numbers 1, 3, 4 and the sum 13, the minimum number is 4 (4 + 4 + 4 + 1). You CAN use a number more than once.
How do you write this? I am extremely confused...
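The recurrence F(x) = 1 + F(x - a[i]) is usually filled in bottom-up: F(0) = 0, and for each sum x you take the best choice of a[i] that still fits. A Python sketch (the function name is mine):

```python
def min_count(numbers, total):
    INF = float('inf')
    dp = [0] + [INF] * total           # dp[x] = min count of numbers summing to x
    for x in range(1, total + 1):
        for a in numbers:
            if a <= x and dp[x - a] + 1 < dp[x]:
                dp[x] = dp[x - a] + 1  # F(x) = 1 + F(x - a)
    return dp[total] if dp[total] != INF else None  # None: sum unreachable
```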