What does quicksort need O(log n) additional memory to store?
I know Wikipedia says it is used for the call stack, but what exactly is stored there? What data does the sorting algorithm need the stack memory for?
2 answers

I think you might find your answer here.
This might not be it, but I believe the stack is used for function calls: since the algorithm is recursive you get lots of calls, and you need the stack to save the state at each function call.
Hope this helps.
In most of the usual implementations, the stack stores the starting and ending indexes of the ranges that should be sorted later.
Recursive version:
int i = partition(a, l, r);
quicksort(a, l, i);      // here the range (l, i) is saved on the call stack
quicksort(a, i + 1, r);  // and here (i + 1, r)
Version with an explicit stack (the if-condition keeps the stack depth smaller):
int i = partition(a, l, r);
if (i - l > r - i) {
    s.push(l, i - 1);
    s.push(i + 1, r);
} else {
    s.push(i + 1, r);
    s.push(l, i - 1);
}
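To make the O(log n) bound concrete, here is a minimal self-contained C sketch of the explicit-stack idea above. The names (`quicksort_iter`, the Lomuto-style `partition`, `MAXSTACK`) are my own, not from the answer. Pushing the larger subrange first means the smaller one is popped and processed next, which is what keeps the number of pending ranges logarithmic.

```c
#include <assert.h>

#define MAXSTACK 64  /* comfortably more than 2*log2(n) for any int-indexed array */

static void swap_int(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Lomuto partition: last element is the pivot; returns its final position. */
static int partition(int a[], int lo, int hi) {
    int pivot = a[hi];
    int i = lo - 1;
    for (int j = lo; j < hi; j++)
        if (a[j] <= pivot)
            swap_int(&a[++i], &a[j]);
    swap_int(&a[i + 1], &a[hi]);
    return i + 1;
}

void quicksort_iter(int a[], int n) {
    int stack[MAXSTACK][2];   /* each entry: a (lo, hi) range still to sort */
    int top = 0;
    stack[top][0] = 0; stack[top][1] = n - 1; top++;
    while (top > 0) {
        top--;
        int lo = stack[top][0], hi = stack[top][1];
        if (lo >= hi) continue;           /* ranges of size <= 1 are done */
        int p = partition(a, lo, hi);
        /* Push the LARGER side first so the smaller side is popped next;
           each deferred range is then at most half its parent. */
        if (p - lo > hi - p) {
            stack[top][0] = lo;    stack[top][1] = p - 1; top++;
            stack[top][0] = p + 1; stack[top][1] = hi;    top++;
        } else {
            stack[top][0] = p + 1; stack[top][1] = hi;    top++;
            stack[top][0] = lo;    stack[top][1] = p - 1; top++;
        }
    }
}
```

The stack entries are exactly the (start, end) index pairs the answer describes; the recursion in the plain version stores the same pairs implicitly in stack frames.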
See also questions close to this topic

Geospatial search with timestamp using Lucene
I have an assignment to find cab drivers close to a given user's location (similar to Grab/Lyft). I have the drivers' locations (latitude, longitude) with timestamps; this data is pushed from their phones to my server every 2 minutes. When a user requests a ride, I need to find the nearest available driver using the drivers' data. I'm trying to use Lucene's geospatial search for this: I index the drivers' data and search based on the latitude and longitude of the user. I'm able to search with a given latitude/longitude combination and get the nearest driver(s) within a distance parameter. But I don't know how to add another parameter to the search query to constrain the timestamp of the driver's data as well. For example, I want to search only for drivers who are near a given location at a particular timestamp. Can someone help me with this? Here's the code snippet I'm using:
package com.test.trials;

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.nio.file.Paths;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.LatLonPoint;
import org.apache.lucene.document.StoredField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.spatial3d.Geo3DPoint;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class LuceneTrial {

    private IndexSearcher searcher;

    public static void main(String[] args) throws IOException {
        LuceneTrial trial = new LuceneTrial();
        trial.generateData();
        trial.search();
    }

    private void generateData() throws IOException {
        Directory dir = FSDirectory.open(Paths.get("C:/temp/Lucene"));
        Analyzer analyzer = new StandardAnalyzer();
        IndexWriterConfig iwc = new IndexWriterConfig(analyzer);
        iwc.setOpenMode(IndexWriterConfig.OpenMode.CREATE);
        IndexWriter writer = new IndexWriter(dir, iwc);
        try (BufferedReader br = new BufferedReader(new FileReader("C:/temp/Lucene/Drivers.csv"))) {
            String line;
            String[] fieldNames = new String[] { "Driver ID", "Latitude", "Longitude", "Time" };
            while ((line = br.readLine()) != null) {
                // process the line
                String[] tags = line.split(",");
                Document doc = new Document();
                for (int i = 0; i < fieldNames.length; i++)
                    doc.add(new StoredField(fieldNames[i], tags[i]));
                // Add a latlon point to index
                try {
                    doc.add(new LatLonPoint("latlon",
                            Double.parseDouble(tags[1]), Double.parseDouble(tags[2])));
                    Geo3DPoint point = new Geo3DPoint("geo3d",
                            Double.parseDouble(tags[1]), Double.parseDouble(tags[2]));
                    doc.add(point);
                } catch (Exception e) {
                    System.out.println("Skipped: " + line);
                }
                writer.addDocument(doc);
            }
        }
        searcher = new IndexSearcher(DirectoryReader.open(writer));
    }

    public void search() throws IOException {
        System.out.println("\nLatLonQuery around given point, 10km radius ");
        TopDocs docs = searcher.search(
                LatLonPoint.newDistanceQuery("latlon", 6.9270790, 79.8612430, 10 * 1000), 20);
        for (ScoreDoc scoreDoc : docs.scoreDocs) {
            Document doc = searcher.doc(scoreDoc.doc);
            System.out.println(doc);
        }
    }
}
And here's the sample data I'm using:
Driver ID  Latitude   Longitude   Time
1          6.081689   145.391881  7:01:17
2          5.207083   145.7887    8:32:40
3          5.826789   144.295861  8:40:49
4          6.569828   146.726242  8:57:33
5          9.443383   147.22005   6:14:26
6          3.583828   143.669186  8:13:35
7          61.160517  45.425978   8:58:24
8          64.190922  51.678064   7:42:16
9          67.016969  50.689325   6:52:20
10         76.531203  68.703161   6:08:21
11         65.659994  18.072703   7:57:45
12         65.283333  14.401389   7:32:23
13         64.295556  15.227222   8:20:26
14         65.952328  17.425978   8:51:34
15         66.058056  23.135278   8:33:43
16         63.985     22.605556   7:39:35
17         65.555833  23.965      7:20:54
18         64.13      21.940556   7:37:48
19         66.133333  18.916667   6:46:36
20         63.424303  20.278875   7:15:12
21         46.485001  84.509445   6:14:15
22         50.056389  97.0325     7:12:15
23         44.639721  63.499444   6:15:31
24         51.391944  56.083056   7:15:50
25         49.082222  125.7725    6:52:22
Can someone show me how I can search for drivers based on two attributes, distance and time?

Dynamic Programming set sum
This is an exam question which I had no idea how to solve. Given a set S = {2, 9, 14} and an integer N = 22914: the integer N can be obtained by creating a string concatenating the digits 2, 2, 9, 14. Integer N = 16 can be obtained from the string "2+14". Integer N = 48 can be obtained using the string "8+2*9+22". Write pseudocode using dynamic programming to output the number of digits used from the set if a solution exists, otherwise return -1; for example n = 1 returns -1, and n = 22914 returns 4.

adjacency list of a multigraph?
Is there a canonical adjacency list representation of a multigraph?
I came across this SO question on Algorithm to convert adjacency list of a multigraph to an equivalent undirected graph, and realized that I can't recall or find the definition of the adjacency list for a multigraph (a graph with parallel edges and self loops etc.). For example, the wikipedia page for adjacency list shows nothing about "multigraph" or "parallel" (as of Feb 2018).
I can imagine ways of adding extra information to an adjacency list to make it work for multigraphs, such as including the edge ID in each adjacency entry. But I just wanted to ask first whether there is a canonical way to represent an adjacency list for multigraphs.
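For illustration, here is what the edge-ID variant mentioned above might look like in C. This is an assumed convention, not a canonical definition: each adjacency entry carries an `edge_id` alongside the endpoint, so parallel edges become distinct entries, and an undirected self-loop shows up as two entries on the same vertex (one per edge end). All names are illustrative.

```c
#include <assert.h>

#define MAX_DEG 8  /* small fixed bound, enough for the sketch */

typedef struct {
    int to;       /* neighboring vertex */
    int edge_id;  /* distinguishes parallel edges with the same endpoints */
} Adj;

typedef struct {
    Adj nbr[MAX_DEG];
    int deg;      /* number of adjacency entries at this vertex */
} Vertex;

/* Undirected insertion: one entry at each endpoint. For a self-loop
   (u == v) this deliberately adds two entries at the same vertex. */
static void add_edge(Vertex g[], int u, int v, int id) {
    g[u].nbr[g[u].deg++] = (Adj){ v, id };
    g[v].nbr[g[v].deg++] = (Adj){ u, id };
}

/* Count edges between u and v, parallel edges included. */
static int multiplicity(const Vertex g[], int u, int v) {
    int count = 0;
    for (int i = 0; i < g[u].deg; i++)
        if (g[u].nbr[i].to == v)
            count++;
    return count;
}
```

Collapsing to a simple graph is then just deduplicating entries by `to` and dropping `edge_id`, which matches the conversion asked about in the linked question.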

Failed to optimize a version of QuickSort with one recursive call
I'm trying to implement QuickSort with just one recursive call in its code. Many tests are applied to the program: a test file compares it against the real qsort function to verify whether the new version of qsort works or not, running tests from 8 integers in the array up to 2048 integers.
So when I execute the program, it's supposed to write "OK" at every successful test, like this:
Test with 8 integers OK
Test with 16 integers OK
Test with 32 integers OK
Test with 64 integers OK
Test with 128 integers OK
Test with 256 integers OK
Test with 512 integers OK
The code is shown below
The code to find the median of three
void median(int t[], int n) {
    int m = (n - 1) / 2;
    if (t[0] > t[n - 1]) {              /* t[0] > t[n - 1] */
        if (t[m] > t[0])
            ;                            /* t[m] > t[0] > t[n - 1] */
        else if (t[n - 1] > t[m])        /* t[0] > t[n - 1] > t[m] */
            swap(t[0], t[n - 1]);
        else                             /* t[0] >= t[m] >= t[n - 1] */
            swap(t[0], t[m]);
    } else {                             /* t[0] <= t[n - 1] */
        if (t[m] < t[0])
            ;                            /* t[m] < t[0] <= t[n - 1] */
        else if (t[n - 1] < t[m])        /* t[0] <= t[n - 1] < t[m] */
            swap(t[0], t[n - 1]);
        else                             /* t[0] <= t[m] <= t[n - 1] */
            swap(t[0], t[m]);
    }
}
The function for partitioning
int partition(int t[], int n)
{
    int l = 0, r = n, tmp;
    median(t, r);
    while (1) {
        do ++l; while (l < n && t[l] < t[0]);  // l <= n-1
        do --r; while (t[r] > t[0]);           // r >= 0
        if (l >= r)
            break;
        tmp = t[l]; t[l] = t[r]; t[r] = tmp;
    }
    tmp = t[0]; t[0] = t[r]; t[r] = tmp;
    return r;
}
And the code of QuickSort
void version2(int t[], int size)
{
    int m, l = 0, r = size;
    while ((r - l) > 1) {
        m = partition(t + l, r);
        if ((m - l) < (r - m)) {
            version2(t + l, m);
            l = m + 1;
        } else {
            version2(t + m + 1, r);
            r = m;
        }
    }
}
When I try to execute it, I get only this:
Test with 8 integers
As if it were stuck in an infinite loop.
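Not an authoritative diagnosis, but a likely culprit in the code above is that it mixes absolute indices with positions relative to the shifted base pointer `t + l` (e.g. `partition(t + l, r)` is given a length `r` instead of `r - l`), so the range never shrinks. Below is a hedged sketch of a corrected variant; it keeps everything relative to the current subarray and uses a plain Hoare partition in place of the poster's median-of-three version:

```c
#include <assert.h>

/* Hoare-style partition on t[0..n-1] with t[0] as the pivot value.
   Returns m such that t[0..m] <= t[m+1..n-1], with both parts nonempty. */
static int partition2(int t[], int n) {
    int pivot = t[0], l = -1, r = n, tmp;
    for (;;) {
        do ++l; while (t[l] < pivot);
        do --r; while (t[r] > pivot);
        if (l >= r)
            return r;
        tmp = t[l]; t[l] = t[r]; t[r] = tmp;
    }
}

/* Recurse on the smaller half, loop on the larger one: this is what bounds
   the recursion depth. All sizes are relative to the current base pointer. */
void version2_fixed(int t[], int size) {
    while (size > 1) {
        int m = partition2(t, size);
        if (m + 1 < size - m - 1) {
            version2_fixed(t, m + 1);   /* left part is smaller: recurse */
            t += m + 1;                 /* then iterate on the right part */
            size -= m + 1;
        } else {
            version2_fixed(t + m + 1, size - m - 1);
            size = m + 1;               /* iterate on the left part */
        }
    }
}
```

With the pivot value taken from `t[0]`, Hoare's scheme always returns `m < size - 1`, so both subranges are strictly smaller and the loop makes progress instead of spinning.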

CLRS Quicksort: not working correctly
I am trying to implement the Quicksort algorithm as given in the classic CLRS book. I have implemented it exactly line by line in my C# program, but the output is unsorted. The following is my code in its entirety, along with the output:
using System;

namespace clrs {
    class MainClass {
        public static void Main(string[] args) {
            int[] numbers = { 2, 1, 4 };
            Console.WriteLine("unsorted: " + string.Join(",", numbers));
            quicksort(numbers, 0, numbers.Length - 1);
            Console.WriteLine("sorted: " + string.Join(",", numbers));
            Console.WriteLine("");

            numbers = new int[] { 7, 2, 1, 6, 1 };
            Console.WriteLine("unsorted: " + string.Join(",", numbers));
            quicksort(numbers, 0, numbers.Length - 1);
            Console.WriteLine("sorted: " + string.Join(",", numbers));
            Console.WriteLine("");

            numbers = new int[] { 2, 8, 7, 1, 3, 5, 6, 4 };
            Console.WriteLine("unsorted: " + string.Join(",", numbers));
            quicksort(numbers, 0, numbers.Length - 1);
            Console.WriteLine("sorted: " + string.Join(",", numbers));
            Console.WriteLine("");

            numbers = new int[] { 2, 33, 6, 9, 8, 7, 1, 2, 5, 4, 7 };
            Console.WriteLine("unsorted: " + string.Join(",", numbers));
            quicksort(numbers, 0, numbers.Length - 1);
            Console.WriteLine("sorted: " + string.Join(",", numbers));
            Console.WriteLine("");
        }

        public static void quicksort(int[] a, int p, int r) {
            int q;
            if (p < r) {
                q = partition(a, p, r);
                quicksort(a, p, q - 1);
                quicksort(a, q + 1, r);
            }
        }

        public static int partition(int[] a, int p, int r) {
            int x = a[r];
            int i = p - 1;
            for (int j = p; j < r - 1; j++) {
                if (a[j] <= x) {
                    i = i + 1;
                    int temp = a[i];
                    a[i] = a[j];
                    a[j] = temp;
                }
            }
            int temp1 = a[i + 1];
            a[i + 1] = a[r];
            a[r] = temp1;
            return (i + 1);
        }
    }
}
The output is:
unsorted: 2,1,4
sorted: 2,4,1
unsorted: 7,2,1,6,1
sorted: 1,1,2,7,6
unsorted: 2,8,7,1,3,5,6,4
sorted: 2,3,1,4,5,7,8,6
unsorted: 2,33,6,9,8,7,1,2,5,4,7
sorted: 1,2,5,6,7,2,7,8,9,33,4
I have implemented quicksort exactly as it is given in CLRS, 3rd edition. My code compiles but the output is not completely sorted.
What am I doing wrong? Or is there a bug in the CLRS pseudocode (highly unlikely)?
Please HELP!
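One plausible culprit: CLRS writes the partition loop as "for j = p to r - 1", which is inclusive, so in C-family syntax it should be `j < r` (equivalently `j <= r - 1`), not `j < r - 1` as in the code above; the latter skips the element just before the pivot. A C transcription with the inclusive bound (my own rendering of the CLRS 3rd-edition pseudocode, not the book's text):

```c
#include <assert.h>

static void swap_int(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* CLRS PARTITION(A, p, r): last element as pivot, Lomuto scheme. */
int partition_clrs(int a[], int p, int r) {
    int x = a[r];                  /* pivot */
    int i = p - 1;
    for (int j = p; j < r; j++)    /* j runs over p .. r-1 INCLUSIVE */
        if (a[j] <= x)
            swap_int(&a[++i], &a[j]);
    swap_int(&a[i + 1], &a[r]);    /* place pivot between the two parts */
    return i + 1;
}

/* CLRS QUICKSORT(A, p, r). */
void quicksort_clrs(int a[], int p, int r) {
    if (p < r) {
        int q = partition_clrs(a, p, r);
        quicksort_clrs(a, p, q - 1);
        quicksort_clrs(a, q + 1, r);
    }
}
```

With `j < r - 1`, the first example `{2, 1, 4}` partitions into `{2, 4, 1}`, which matches the broken output shown above, so the off-by-one bound fits the symptoms.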

Best sorting strategy: combining quicksort followed by insertion sort followed by mergesort
To rephrase: can Timsort benefit from an initial few partitioning passes of Quicksort?
Quicksort works top down and creates disjoint partitions, but it has some variability and can degrade in some cases.
Insertion sort works best at smaller sizes, around 8-32 elements (2^3 to 2^5).
Mergesort works well with already-sorted data but has the memory overhead of copying.
To get the best of all worlds, e.g. to sort 100K records (2^17 or so), I would think it is better to combine all 3 sorts.
Phase 1. Quicksort (top down, to reduce large disorder): start partitioning top down, stopping below 4K partition size, i.e. about log2(N) - 12 = 5 levels of partitions for 100K records. This would divide a large dataset into manageable sizes. In the worst case we may see only 1 partition, but all is not lost; maybe we can stop partitioning after 2 consecutive worst-case steps, instead of going through all the steps we planned.
But this should reduce the work in the Mergesort phase by restricting partition size.
Phase 2. Insertion sort (bottom up): we run insertion sort in batches of 2^3 to 2^5 elements inside each partition and stop.
Phase 3. Mergesort (works best with presorted data): start merging the sorted batches bottom up inside each partition. Since each partition is smaller than the original dataset, it would require less memory.
Is it used like this anywhere? Is something wrong? One objection is that Quicksort would cost the sort its stability. But will performance benefit from this approach or not?
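The quicksort + insertion-sort part of this idea is common practice (the "small-subarray cutoff" hybrid). Here is a minimal C sketch of those first two phases, with an assumed cutoff of 32 (the 2^5 figure above); names like `hybrid_sort` are illustrative. The mergesort phase is omitted: after quicksort-with-cutoff finishes, the small sorted runs sit in partitions that are already mutually ordered, so there is nothing left to merge in this particular arrangement.

```c
#include <assert.h>

#define CUTOFF 32  /* below this size, insertion sort takes over (phase 2) */

/* Phase 2: insertion sort, efficient on small ranges. */
static void insertion_sort(int a[], int n) {
    for (int i = 1; i < n; i++) {
        int key = a[i], j = i - 1;
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}

/* Phase 1: Hoare partition around the middle element's value.
   Returns m with a[0..m] <= a[m+1..n-1], both parts nonempty for n > 2. */
static int hoare_partition(int a[], int n) {
    int pivot = a[n / 2], l = -1, r = n, tmp;
    for (;;) {
        do ++l; while (a[l] < pivot);
        do --r; while (a[r] > pivot);
        if (l >= r)
            return r;
        tmp = a[l]; a[l] = a[r]; a[r] = tmp;
    }
}

/* Partition top down until ranges drop below CUTOFF, then switch. */
void hybrid_sort(int a[], int n) {
    if (n <= CUTOFF) {
        insertion_sort(a, n);
        return;
    }
    int m = hoare_partition(a, n);
    hybrid_sort(a, m + 1);
    hybrid_sort(a + m + 1, n - m - 1);
}
```

Note this variant, like plain quicksort, is not stable, which matches the stability concern raised above; keeping stability is exactly why Timsort builds on merging rather than partitioning.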