Average-case complexity of an algorithm
What is meant by average case complexity of an algorithm?
- The average of the complexities over all possible inputs, OR
- The complexity of an input which is the average of all possible inputs.
As I understand it, when we say that the average-case complexity of Quicksort is N log(N), it is for an input which is neither skewed toward being somewhat partially sorted nor extremely unsorted. This complexity is for a random input, which (to me) means an input which is the average of all possible inputs. Hence the term average complexity.
Kindly correct me where I am wrong, and explain what average complexity actually means.
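For context, the first reading is the standard one: average-case complexity is the expected cost over a distribution of inputs (usually uniformly random orderings), not the cost of a single "average-looking" input. A minimal sketch makes the distinction concrete; it assumes a first-element-pivot quicksort counting key comparisons (the pivot rule and all names here are my own, chosen so that an already-sorted array hits the O(N^2) worst case):

```java
import java.util.Random;

class QuickSortCost {
    static long comparisons;

    // Textbook quicksort using the FIRST element as pivot (assumption made
    // here so that an already-sorted input triggers the worst case).
    static void quicksort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[lo], i = lo;
        for (int j = lo + 1; j <= hi; j++) {
            comparisons++;                           // count every key comparison
            if (a[j] < pivot) { i++; int t = a[i]; a[i] = a[j]; a[j] = t; }
        }
        int t = a[lo]; a[lo] = a[i]; a[i] = t;       // put pivot in its final slot
        quicksort(a, lo, i - 1);
        quicksort(a, i + 1, hi);
    }

    static long count(int[] a) {
        comparisons = 0;
        quicksort(a, 0, a.length - 1);
        return comparisons;
    }

    public static void main(String[] args) {
        int n = 100, trials = 200;
        Random rnd = new Random(42);

        int[] sorted = new int[n];
        for (int i = 0; i < n; i++) sorted[i] = i;
        // One specific input (already sorted) hits the worst case: n(n-1)/2 comparisons.
        System.out.println("worst case (sorted input): " + count(sorted));

        long total = 0;
        for (int tr = 0; tr < trials; tr++) {
            int[] a = new int[n];
            for (int i = 0; i < n; i++) a[i] = i;
            for (int i = n - 1; i > 0; i--) {        // Fisher-Yates shuffle: a random input
                int j = rnd.nextInt(i + 1);
                int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
            }
            total += count(a);
        }
        // Averaging over many random inputs lands near the O(N log N) regime,
        // well below the worst case.
        System.out.println("average over random inputs: " + total / trials);
    }
}
```

So "average" refers to averaging the cost over inputs, not to running the algorithm on some average input.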
See also questions close to this topic

Log-Structured Merge tree key lookup complexity
I'm currently studying the Log-Structured Merge (LSM) tree described by O'Neil et al. Something is not totally clear to me regarding the worst-case lookup complexity in an LSM tree. The components that reside on disk store their data in a B-tree, right?
As stated in the paper:
As a rule, in order to guarantee that all entries in the LSM-tree have been examined, it is necessary for an exact-match find or range find to access each component C_n through its index structure.
This indicates to me that, in the worst case, a lookup through the C_n components residing on disk is O(n).
But that is only traversing over the components, not traversing inside a component over the key-value pairs. Since the key-value pairs are stored in a B-tree, which has O(log n) lookup complexity, does this mean that a lookup in an LSM tree is of complexity O(n log n)?
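One note that may help: the n in "access each component" counts components, not entries. With K components, each holding up to n entries in a B-tree, a worst-case exact-match find consults each component once at O(log n) apiece, giving on the order of K · log n, where K is typically small and bounded, rather than n · log n in the number of entries. A minimal sketch, using Java's TreeMap as a stand-in for a per-component B-tree index (all names here are illustrative):

```java
import java.util.*;

class LsmLookup {
    // components.get(0) is the newest (C0); later entries are older (C1..Ck).
    // TreeMap stands in for each disk component's B-tree index.
    static Integer find(List<TreeMap<Integer, Integer>> components, int key) {
        for (TreeMap<Integer, Integer> c : components) { // at most K components...
            Integer v = c.get(key);                      // ...each an O(log n) tree lookup
            if (v != null) return v;                     // the newest version wins
        }
        return null;                                     // absent from every component
    }

    public static void main(String[] args) {
        TreeMap<Integer, Integer> c0 = new TreeMap<>(Map.of(1, 100));
        TreeMap<Integer, Integer> c1 = new TreeMap<>(Map.of(1, 99, 2, 42));
        List<TreeMap<Integer, Integer>> lsm = List.of(c0, c1);
        System.out.println(find(lsm, 1));  // 100: C0 shadows the older value in C1
        System.out.println(find(lsm, 2));  // 42: found only in the older C1
        System.out.println(find(lsm, 3));  // null: in no component
    }
}
```

Searching newest-to-oldest also captures the shadowing behavior: an entry in a newer component hides older versions of the same key.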

Cover a given range of an array
I'm trying to figure out the algorithm to implement a method, and I would need some advice in this regard. I am given an array and a range:
A = [1, 15, 30, 40, 50] range = 10
I need to figure out the minimum number of points that would cover all the numbers of the given array.
Cover means the distance you can reach on both the left and the right with the provided range. For example, if I have a point at 23 and a range of 5, then I can reach 18–23 on the left side and 23–28 on the right side. The points can cover both the left and the right sides of the given range.
In the provided array the result is 3. We would need one point at 1, another at 15, and a 3rd one at 40. I explain it below:
i. The point at 1 will cover up to 10 (1–10)
ii. The point at 15 will cover itself (15)
iii. The point at 40 will cover left to 30 and right to 50 (30–40–50)
Please provide a good reason before you attempt to downvote. Thank you.
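Judging from the worked example (the third point sits at 40, an array element, rather than at a midpoint like 35), the points apparently must be chosen from the array elements themselves. Under that assumption, a standard greedy after sorting works: take the leftmost uncovered element x, place a point at the largest element ≤ x + range, and skip every element that point covers. A sketch (method and class names are mine):

```java
import java.util.Arrays;

class RangeCover {
    // Minimum number of points, chosen from the array itself, so that every
    // element is within `range` of some point. The "points must be array
    // elements" rule is an assumption inferred from the question's example.
    static int minPoints(int[] a, int range) {
        int[] s = a.clone();
        Arrays.sort(s);
        int count = 0, i = 0, n = s.length;
        while (i < n) {
            int j = i;                                          // s[i] is the leftmost uncovered element
            while (j + 1 < n && s[j + 1] <= s[i] + range) j++;  // farthest element still covering s[i]
            int point = s[j];
            count++;                                            // place a point at s[j]
            while (j < n && s[j] <= point + range) j++;         // skip everything this point covers
            i = j;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(minPoints(new int[]{1, 15, 30, 40, 50}, 10));  // 3
    }
}
```

On the example this chooses points 1, 15, and 40, matching the expected answer of 3.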

Big O complete definitions
I want to clarify some definitions related to Big O notation, because it seems that I have absolutely messed them up.
- The efficiency of an algorithm depends on the machine.
- We need to express algorithm efficiency in a machine-independent manner.
Algorithm efficiency is determined by two factors, the time and the space the algorithm uses to execute:
a. Running Time Factor − the dependency between the algorithm's input parameters and the running time the algorithm needs to complete.
b. Space Factor − the dependency between the algorithm's input parameters and the maximum memory space the algorithm requires to complete.
We cannot measure algorithm running time in seconds, because different machines have different configurations. Therefore, someone (I don't know who) decided to use the number of the processor's atomic instructions as the measure of running time and call it algorithm runtime. It means that algorithm runtime is the number of instructions the processor should perform for the algorithm to complete.
It's obvious that the number of instructions depends on the algorithm's input parameters. Therefore, we can say that algorithm runtime is the dependency between the input data size of the algorithm and the number of instructions that the processor will perform in order for the algorithm to complete.
So we get a new, more precise definition of algorithm runtime.
But for some reason someone said: okay, let's call it algorithm runtime complexity instead of simply algorithm runtime.
That was a brilliant idea, but it doesn't work well, because one day someone struggled with code like this:
    void foo(int n) {
        if (n == 1) return;            // finishes immediately for this input
        for (int i = 0; i < n; i++)
            System.out.println(i);     // otherwise does n iterations
    }
It becomes obvious that the foo function has no single complexity: O(1) or O(N). The solution was pretty simple. Someone said: okay, let's say that we have three types of algorithm runtime complexity:
- Worst-case complexity: O(N)
- Average-case complexity: ??
- Best-case complexity: O(1)
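To make the best-case/worst-case split for foo concrete, here is a hypothetical instrumented variant (fooSteps is a name introduced here, not from any standard) that returns how many steps it performs instead of printing:

```java
class FooCost {
    // Instrumented sketch of foo: counts the work instead of printing.
    static int fooSteps(int n) {
        int steps = 1;                 // the n == 1 check itself
        if (n == 1) return steps;      // best case: a constant number of steps
        for (int i = 0; i < n; i++)
            steps++;                   // one step per loop iteration
        return steps;                  // worst case: grows linearly with n
    }

    public static void main(String[] args) {
        System.out.println(fooSteps(1));     // 1
        System.out.println(fooSteps(1000));  // 1001
    }
}
```

The step count is 1 for n = 1 but about n for every other input, which is exactly why a single "runtime complexity" is not well-defined without naming the case.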
And this person also said: okay guys, if we want to evaluate algorithm runtime complexity, we need to use only the worst case. And starting from this point, algorithm runtime complexity became the algorithm's worst-case runtime complexity.
But I'm not done yet! Another person said: that's a great long name, but I don't like the worst-case part, and I want to give this name a more mathematical look, so I will call it algorithm upper-bound runtime complexity.
So it means that algorithm upper-bound runtime complexity is the same thing as the algorithm's worst-case runtime complexity. Okay, let's say that we have two algorithms.
- The first one has upper-bound runtime complexity N^3 + N + 2 + log2(N/5)
- The second one has upper-bound runtime complexity 3 * N + N^2 + N + log2(N^2)
Now we need to compare these two algorithms. The obvious idea that comes to mind: let's calculate the running time for N = 10. If we do that, we will find out that the times are roughly the same. Okay, let's try another N, for example, N = 100. Even with this we get roughly the same results again. The solution to this problem is to use asymptotic analysis, which allows us to understand how both algorithms' upper-bound runtime complexities behave at infinity.
- For the first algorithm it will be N^3
- For the second algorithm it will be N^2
But asymptotic analysis is a very large and powerful tool, so in order not to overkill everything, computer scientists decided to invent Big O notation.
Big O notation is a simplified version of asymptotic analysis that lets us drop the constants and non-dominant terms. So according to Big O notation, the algorithm's upper-bound runtime complexity keeps only the dominant term. As usual, someone said: Big O is a great name, but I want to call it Order of Growth.
Big O notation (or Order of Growth) describes the upper bound of the dependency between the input data size of the algorithm and the number of operations that the processor will perform in order for the algorithm to complete.
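A quick numeric check (a sketch; the class and method names are mine) shows why only the dominant term survives: for the second expression above, the ratio of the full expression to its N^2 term tends to 1 as N grows, so the constants and lower-order terms stop mattering:

```java
class DominantTerm {
    // The full second expression from above: 3*N + N^2 + N + log2(N^2)
    static double full(double n) {
        return 3 * n + n * n + n + Math.log(n * n) / Math.log(2);
    }

    static double ratio(double n) {
        return full(n) / (n * n);   // compare against the dominant N^2 term
    }

    public static void main(String[] args) {
        System.out.println(ratio(10));        // noticeably above 1: lower-order terms still matter
        System.out.println(ratio(1_000_000)); // essentially 1: N^2 dominates
    }
}
```

This is the sense in which O(3*N + N^2 + N + log2(N^2)) collapses to O(N^2).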
And one more funny fact: we write O(n) (read: order n or order of n).
Tell me, please, if I am wrong with these definitions or with my understanding of Big O notation.
One more question: what exactly does upper bound mean in Big O notation? The worst case of the runtime (O(N) in my example), or the upper bound of the expression that we get after evaluating the algorithm (N^3 in my example)?
Java Arrays.sort(arr) outputs 0 instead of sorted arr
I am using Java's Arrays.sort on a randomly generated int array.
The result should be a sorted array, but instead I get 0's and a few of the randomly generated numbers.
Here is my code:
    public class test {
        public static void main(String[] args) {
            int[] list = new int[10];
            Random rand = new Random();
            for (int i = 0; i < list.length; i++) {
                list[i] = rand.nextInt(100);
                Arrays.sort(list);
                System.out.println(list[i]);
            }
        }
    }
The expected output should be a sorted array of 10 random integers, but instead I always get the first 5 numbers as 0.
Can anyone help?
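The zeros come from sorting inside the fill loop: at iteration i the array still contains 9 − i unfilled slots holding 0, the sort moves those zeros to the front, and list[i] (printed immediately afterwards) lands on a zero for roughly the first five iterations, assuming the random values are nonzero. A corrected sketch (the class name is arbitrary): fill the whole array first, sort once, then print.

```java
import java.util.Arrays;
import java.util.Random;

public class SortedRandomArray {
    public static void main(String[] args) {
        int[] list = new int[10];
        Random rand = new Random();
        for (int i = 0; i < list.length; i++)
            list[i] = rand.nextInt(100);   // 1. fill the whole array first
        Arrays.sort(list);                 // 2. sort once, after filling
        for (int value : list)             // 3. then print the sorted result
            System.out.println(value);
    }
}
```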

How do I find all pairs of numbers in an array
I have a sequence of numbers marked 1 through N in an array. I want to find all possible pairs of indices (i, j) such that 1 <= i < j <= N and apply some operation (bitwise XOR) to each such A[i] and A[j]. The number of elements can be up to 10e5, and the naive approach of using two loops is slow. Is there another way to quickly find all such numbers?
For example: A = [2, 7, 6, 3, 2]. All such pairs of A[i] and A[j] are (2,7), (2,6), (2,3), (2,2), (7,6), (7,3), (7,2), (6,3), (6,2), (3,2).
I will then apply bitwise XOR to each pair and check if the resulting number can be represented as a sum of two primes (not necessarily distinct); the two primes must also have the same parity (both odd or both even). I need to report how many such pairs there are.
I am using the sieve of Atkin for prime numbers, and I can report those two primes, if they exist, in linear time. Since numbers can repeat in the array, I am also using memoization to save more time. The only trouble I have is building the pairs, which takes O(n^2) and makes this very inefficient. Is there any other way of doing this?
Here is the original problem : https://www.codechef.com/SEPT18B/problems/XORIER
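One way to avoid materializing the pairs at all, sketched under the Goldbach-style assumption that every even number ≥ 4 splits into two same-parity primes (4 as 2 + 2, even x ≥ 6 as two odd primes), while 0 and 2 do not: a XOR b is even exactly when a and b have the same parity, so the candidate pairs can be counted from the parity totals in O(n), and a frequency map then subtracts the pairs whose XOR is 0 (equal values) or 2 (values differing only in bit 1). The method name countPairs is mine, not from the problem:

```java
import java.util.HashMap;
import java.util.Map;

class XorPairs {
    // Counts index pairs i < j whose XOR is expressible as a sum of two
    // same-parity primes. Assumes (Goldbach-style) every even x >= 4 works;
    // even x in {0, 2} does not, and odd x never does (different parity).
    static long countPairs(int[] a) {
        long odd = 0, even = 0, bad = 0;
        Map<Integer, Integer> freq = new HashMap<>();
        for (int v : a) {
            if ((v & 1) == 1) odd++; else even++;
            bad += freq.getOrDefault(v, 0);        // earlier elements with XOR == 0
            bad += freq.getOrDefault(v ^ 2, 0);    // earlier elements with XOR == 2
            freq.merge(v, 1, Integer::sum);
        }
        // Same-parity pairs are exactly the pairs with an even XOR.
        long sameParity = odd * (odd - 1) / 2 + even * (even - 1) / 2;
        return sameParity - bad;
    }

    public static void main(String[] args) {
        System.out.println(countPairs(new int[]{2, 7, 6, 3, 2}));  // 3
    }
}
```

On the example, the valid pairs are the three with XOR 4: (2,6), (7,3), and (6,2), so the count is 3, and no O(n^2) pair enumeration is needed.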

Nested list weight sum in JavaScript
I'm trying to work on this problem where we compute the nested weight sum for a given array of numbers.
Given a nested list of integers, return the sum of all integers in the list weighted by their depth.
For example for:
[[1,1],2,[1,1]] ====> the solution is 10.
Four 1's at depth 2 and one 2 at depth 1: 4 * 1 * 2 + 1 * 2 * 1 = 10.
Here's the code I wrote:
    var depthSum = function (nestedList, sum = 0, depth = 1) {
        for (let i = 0; i < nestedList.length; i++) {
            const val = nestedList[i];
            if (Array.isArray(val)) {
                sum = depthSum(val, sum, depth + 1);  // accumulate rather than returning early
            } else {
                sum += val * depth;
            }
        }
        return sum;
    };
I'm trying to work on the converse problem, i.e.:
Given a nested list of integers, return the sum of all integers in the list weighted by their depth. Where previously the weight increased from root to leaf, the weight is now defined bottom-up: the leaf-level integers have weight 1, and the root-level integers have the largest weight.
Example: [[1,1],2,[1,1]] ===> the solution is 8.
How can I use the same approach and solve this problem?
(https://leetcode.com/problems/nested-list-weight-sum-ii/description/)
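One approach with the same recursive shape: compute the maximum depth first, then weight each integer by maxDepth − depth + 1, so leaves weigh 1 and the root level weighs the most. Sketched here in Java with nested lists modeled as Object[] (the question's code is JavaScript; this translation and all names in it are mine):

```java
class NestedWeightII {
    // Depth of the deepest integer, counting the outermost list as depth 1.
    static int maxDepth(Object[] list) {
        int d = 1;
        for (Object o : list)
            if (o instanceof Object[]) d = Math.max(d, 1 + maxDepth((Object[]) o));
        return d;
    }

    // Sum of integers weighted by (maxDepth - depth + 1): leaves weigh 1.
    static int weightedSum(Object[] list, int depth, int maxDepth) {
        int sum = 0;
        for (Object o : list)
            if (o instanceof Object[]) sum += weightedSum((Object[]) o, depth + 1, maxDepth);
            else sum += (Integer) o * (maxDepth - depth + 1);
        return sum;
    }

    static int depthSumInverse(Object[] list) {
        return weightedSum(list, 1, maxDepth(list));
    }

    public static void main(String[] args) {
        Object[] nested = {new Object[]{1, 1}, 2, new Object[]{1, 1}};
        System.out.println(depthSumInverse(nested));  // 8
    }
}
```

Equivalently, since the bottom-up weight is (maxDepth + 1) − depth, the answer equals (maxDepth + 1) · (flat sum) − (the part I weight sum): here 3 · 6 − 10 = 8, so a single part-I-style pass plus a flat sum also suffices.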