Multilevel groupby subpopulation percentages
Let's consider the following dataframe:
d = {'Location': ['A','A','B','B','C','C','A','C','A'],
     'Gender': ['M','M','F','M','M','F','M','M','M'],
     'Edu': ['N','N','Y','Y','Y','N','Y','Y','Y'],
     'Access1': [1,0,1,0,1,0,1,1,1], 'Access2': [1,1,1,0,0,1,0,0,1]}
df = pd.DataFrame(data=d, dtype=np.int8)
Output from dataframe:
Access1 Access2 Edu Gender Location
0 1 1 N M A
1 0 1 N M A
2 1 1 Y F B
3 0 0 Y M B
4 1 0 Y M C
5 0 1 N F C
6 1 0 Y M A
7 1 0 Y M C
8 1 1 Y M A
Then I am using groupby to analyse the frequencies in df
D0=df.groupby(['Location','Gender','Edu']).sum()
((D0/ D0.groupby(level = [0]).transform(sum))*100).round(3).astype(str) + '%'
Output:
Access1 Access2
Location Gender Edu
A M N 33.333% 66.667%
Y 66.667% 33.333%
B F Y 100.0% 100.0%
M Y 0.0% 0.0%
C F N 0.0% 100.0%
M Y 100.0% 0.0%
From this output, I infer that the 33.333% for uneducated men in location A with access to service 1 (Access1) comes from considering the 3 people in location A who have access to service 1, of whom 1 is an uneducated man (1/3).
Yet I wish to get a different output. I would like to consider the total of 4 men in location A as my 100%. 50% of this group of men are uneducated. Out of that 50% of uneducated men, 25% have access to service 1. So, the percentage I would like to see in the table is 25% (total of uneducated men in area A accessing service 1). Is groupby the right way to get there, and what would be the best way to measure the % of access to service 1 while considering a disaggregation from the total population of reference per location?
1 answer

I believe you need to divide D0 by the first level of the MultiIndex mapped by a Series:

D0 = df.groupby(['Location','Gender','Edu']).sum()

a = df['Location'].value_counts()
#alternative
#a = df.groupby(['Location']).size()
print (a)
A    4
C    3
B    2
Name: Location, dtype: int64

df1 = D0.div(D0.index.get_level_values(0).map(a.get), axis=0)
print (df1)
                      Access1   Access2
Location Gender Edu
A        M      N    0.250000  0.500000
                Y    0.500000  0.250000
B        F      Y    0.500000  0.500000
         M      Y    0.000000  0.000000
C        F      N    0.000000  0.333333
         M      Y    0.666667  0.000000
Detail:
print (D0.index.get_level_values(0).map(a.get))
Int64Index([4, 4, 2, 2, 3, 3], dtype='int64', name='Location')
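The same per-location denominator can also be broadcast with DataFrame.div and its level argument, which aligns a Series on one level of the MultiIndex without building the mapped array by hand. A self-contained sketch of this equivalent formulation:

```python
import pandas as pd

d = {'Location': ['A','A','B','B','C','C','A','C','A'],
     'Gender':   ['M','M','F','M','M','F','M','M','M'],
     'Edu':      ['N','N','Y','Y','Y','N','Y','Y','Y'],
     'Access1':  [1,0,1,0,1,0,1,1,1],
     'Access2':  [1,1,1,0,0,1,0,0,1]}
df = pd.DataFrame(d)

D0 = df.groupby(['Location','Gender','Edu']).sum()
a = df.groupby('Location').size()   # 4 people in A, 2 in B, 3 in C

# level=0 aligns `a` on the Location level of D0's MultiIndex
df1 = D0.div(a, level=0, axis=0) * 100
print(df1.loc[('A', 'M', 'N'), 'Access1'])   # 25.0
```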
See also questions close to this topic

Date Quarters in 'Series' object python
I'm trying to count the values for each date in order to find which quarter has the highest frequency. I've tried these methods but nothing is working so far... please help :) thanks. Note: stime is a 'Series' object.
stime=df['timestamp']
#print (df['timestamp'].filter(like='08', axis=0)
#stime.filter(like='20180718')
#stime.between_date('20180101','20180201', include_start=True, include_end=True)
#stime.month
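If the timestamp column holds parseable datetimes, one possible route (a sketch with made-up dates, since the real data isn't shown) is the .dt accessor, whose quarter attribute can be tallied with value_counts:

```python
import pandas as pd

# Hypothetical stand-in for df['timestamp']
stime = pd.Series(pd.to_datetime(['2018-01-15', '2018-04-02',
                                  '2018-07-18', '2018-07-30']))

# .dt exposes datetime fields on a Series; value_counts tallies each quarter
quarter_counts = stime.dt.quarter.value_counts()
print(quarter_counts.idxmax())   # 3 -> the third quarter occurs most often
```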

Converting a multiple datatype dataframe to integer coded dataframe in python using pandas
I have a dataframe like this in python 
  INSTRUMENT_TYPE_CD   RISK_START_DT  ...  FIN_POS_IND PL_FINAL_IND
0     Physical Index  01032017 00:00  ...            0           No
1     Fin Basis Swap  01092018 00:00  ...            0           No
2     Physical Index  01092017 00:00  ...            0           No
3     Physical Index  01122016 00:00  ...            0           No
4     Fin Basis Swap  01022018 00:00  ...            0           No
As you can see, the values of the elements in the columns are repetitive and generally strings. I want to convert this dataframe into an integer-coded dataframe that maps each unique string in a column to some unique integer/number.
So far I have come up with this (normalise method) but it doesn't work.
normalise(dataframe)

def normalise(dataframe):
    for column in dataframe:
        dataframe[column] = dataframe.apply(unique_code_mapper(dataframe[column]))
    return dataframe

def unique_code_mapper(column):
    unique_array = []
    for val in column:
        if val in unique_array:
            column.loc[val] = unique_array.index(val)
        else:
            unique_array.append(val)
            column.loc[val] = unique_array.index(val)
    return column
It returns the following error:
TypeError: ("'Series' object is not callable", 'occurred at index INSTRUMENT_TYPE_CD')
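The order-of-first-appearance mapping that the normalise function attempts is what pandas.factorize does directly; a sketch on a trimmed version of the data above:

```python
import pandas as pd

df = pd.DataFrame({'INSTRUMENT_TYPE_CD': ['Physical Index', 'Fin Basis Swap',
                                          'Physical Index', 'Physical Index',
                                          'Fin Basis Swap'],
                   'PL_FINAL_IND': ['No', 'No', 'No', 'No', 'No']})

# factorize gives each unique value the integer of its first appearance
coded = df.apply(lambda col: pd.factorize(col)[0])
print(coded['INSTRUMENT_TYPE_CD'].tolist())   # [0, 1, 0, 0, 1]
```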

How to add constant string in each elements of list in python?
I want to add a constant value as a prefix to each element of a list. I want to do something similar to this post. But the answers there use a for loop, and I want to avoid for loops in my program.
My objective is to create a list whose values are
"unknown contact number 0", "unknown contact number 1", "unknown contact number 2" ... "unknown contact number n".
Here unknown contact number is my prefix; I want to add it to each element of my list. So far I tried this,
x=pd.DataFrame(index=range(val))
print ('unknown contact number '+x.index.astype(str)).values.tolist()
My question is: am I adding more complexity to my code by avoiding for loops? Or are there better approaches to solve this problem?
Thanks in advance.
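The Index-based concatenation in the question already works element-wise; a trimmed sketch without the intermediate DataFrame (n stands in for the original val):

```python
import pandas as pd

n = 3   # stands in for the original `val`

# String + Index concatenates element-wise, with no explicit for loop
labels = ('unknown contact number ' + pd.RangeIndex(n).astype(str)).tolist()
print(labels)
```

That said, a plain list comprehension runs as a fast internal loop and is often the more readable choice here.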

R: Frequency of all column combinations
Problem description
I have a list of strings of equal size like this:
example.list <- c('BBCD','ABBC','ADDB','ACBB')
Then I want to obtain the frequency of occurrence of specific letters at specific positions. First I convert this to a matrix:
     A1 B1 C1 D1 A2 B2 C2 D2 A3 B3 C3 D3 A4 B4 C4 D4
[1,]  0  1  0  0  0  1  0  0  0  0  1  0  0  0  0  1
[2,]  1  0  0  0  0  1  0  0  0  1  0  0  0  0  1  0
[3,]  1  0  0  0  0  0  0  1  0  0  0  1  0  1  0  0
[4,]  1  0  0  0  0  0  1  0  0  1  0  0  0  1  0  0
[5,]  1  0  0  0  0  1  0  0  0  1  0  0  0  0  0  1
Now I want to obtain the frequency of each column combination. Some examples:
A1 : B2 = 2
A1 : B3 = 3
B1 : B2 = 1
.. etc
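In matrix terms, the pairwise counts are exactly the cross-product t(M) %*% M (crossprod(M) in R): entry (i, j) counts the rows where both indicator columns are 1. A pandas/numpy sketch of the same idea, using the four example strings (the matrix in the question shows five rows, so its counts differ):

```python
import pandas as pd

strings = ['BBCD', 'ABBC', 'ADDB', 'ACBB']
letters = pd.DataFrame([list(s) for s in strings])

# One-hot encode "letter at position", then rename '0_A' -> 'A1' etc.
M = pd.get_dummies(letters).astype(int)
M.columns = [c.split('_')[1] + str(int(c.split('_')[0]) + 1) for c in M.columns]

# (M^T M)[i, j] is the number of strings where both columns are 1
co = M.T @ M
print(co.loc['A1', 'B3'])   # 2  (ABBC and ACBB have A at pos 1 and B at pos 3)
```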

R: Letter combination frequency from strings
Problem description
I have a list of equal length strings like so:
example.list <- c('ABCD','ABBC','ADDB','ACBB')
Then I want to obtain the frequency of occurrence of specific letters at specific positions. First I convert this to a matrix:
     A1 B1 C1 D1 A2 B2 C2 D2 A3 B3 C3 D3 A4 B4 C4 D4
[1,]  1  0  0  0  0  1  0  0  0  0  1  0  0  0  0  1
[2,]  1  0  0  0  0  1  0  0  0  1  0  0  0  0  1  0
[3,]  1  0  0  0  0  0  0  1  0  0  0  1  0  1  0  0
[4,]  1  0  0  0  0  0  1  0  0  1  0  0  0  1  0  0
Then I can calculate the distances:
dist(t(output), method = 'binary')
This will produce:
   A1        B1        C1        D1        A2        B2        C2        D2        A3        B3        C3        D3        A4        B4        C4
B1 1.0000000
C1 1.0000000 0.0000000
D1 1.0000000 0.0000000 0.0000000
A2 1.0000000 0.0000000 0.0000000 0.0000000
B2 0.5000000 1.0000000 1.0000000 1.0000000 1.0000000
C2 0.7500000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000
D2 0.7500000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000
A3 1.0000000 0.0000000 0.0000000 0.0000000 0.0000000 1.0000000 1.0000000 1.0000000
B3 0.5000000 1.0000000 1.0000000 1.0000000 1.0000000 0.6666667 0.5000000 1.0000000 1.0000000
C3 0.7500000 1.0000000 1.0000000 1.0000000 1.0000000 0.5000000 1.0000000 1.0000000 1.0000000 1.0000000
D3 0.7500000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000 0.0000000 1.0000000 1.0000000 1.0000000
A4 1.0000000 0.0000000 0.0000000 0.0000000 0.0000000 1.0000000 1.0000000 1.0000000 0.0000000 1.0000000 1.0000000 1.0000000
B4 0.5000000 1.0000000 1.0000000 1.0000000 1.0000000 1.0000000 0.5000000 0.5000000 1.0000000 0.6666667 1.0000000 0.5000000 1.0000000
C4 0.7500000 1.0000000 1.0000000 1.0000000 1.0000000 0.5000000 1.0000000 1.0000000 1.0000000 0.5000000 1.0000000 1.0000000 1.0000000 1.0000000
D4 0.7500000 1.0000000 1.0000000 1.0000000 1.0000000 0.5000000 1.0000000 1.0000000 1.0000000 1.0000000 0.0000000 1.0000000 1.0000000 1.0000000 1.0000000
I want to obtain a similar table, but for the frequency instead of the distance. Note for example that the distance indirectly tells me the frequency: the distance of 0.5 for A1 to B2 corresponds to a frequency of 2. This will however only work for A1 compared to all other columns.
NA variables in dplyr summary r
I am trying to create a table which includes relative frequencies (counts) of variables taken from two groups (A and B) that fall within pre-given temporal intervals. My problem is that if a row starts at 0 seconds (see start_sec), the variable does not fall within the 0-5 seconds interval but is marked as NA (see Output). My wish is to include these cases within the above-mentioned interval. This is a dummy example:
Variables
group <- c("A","A","A","A","A","A","B","B","B")
person <- c("p1","p1","p1","p3","p2","p2","p1","p1","p2")
start_sec <- c(0,10.7,11.8,3.9,7.4,12.1,0,3.3,0)
dur_sec <- c(7.1,8.2,9.3,10.4,11.5,12.6,13.7,14.8,15.9)
Data frame
df <- data.frame(group,person,start_sec,dur_sec)
df
Pipeline
df %>%
  group_by(group, person, interval=cut(start_sec, breaks=c(0,5,10,15))) %>%
  summarise(counts=n(), sum_dur_sec=sum(dur_sec))
Output (so far)
Besides having the variables starting at 0 seconds assigned to the 0-5 interval, I would like to have two additional columns:
counts_change_perc: the percentage change of counts between consecutive intervals. In particular, how did the sum of counts in interval 0-5 change relative to the sum of counts in the following 5-10 second interval?
dur_s_change_perc: the same percentage drop in the second column, this time taken over sum_dur_sec rather than counts.
Thank you in advance for all comments and feedback!
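pandas has the same left-open interval behaviour, which may make the underlying issue easier to see: R's cut() takes an analogous include.lowest argument that closes the first bin so an exact 0 is kept. A pandas sketch of the boundary fix (pandas rather than dplyr, for illustration):

```python
import pandas as pd

start_sec = pd.Series([0, 10.7, 11.8, 3.9, 7.4, 12.1, 0, 3.3, 0])

# By default bins are open on the left, so an exact 0 falls outside (0, 5]
# and becomes NaN; include_lowest=True closes the first interval
interval = pd.cut(start_sec, bins=[0, 5, 10, 15], include_lowest=True)
print(interval.isna().sum())   # 0
```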

Edit Pie of Pie Chart in PowerPoint
I'd like to use the "pie of pie" chart in PowerPoint, to show the percentage of different categories of one element in the main pie. How can I select data for the 2nd (minor) pie chart?
When I press right click > select data, I can only edit the data of the main pie.
In the picture below, I want to have the 2nd pie to show subcategories of the orange slice in the original chart.

How to calculate the Final Total sum and calculate percentages
Below I have a query that returns the total number of students for each first language spoken.
CTE
WITH Lang AS (
    SELECT language, studentcount, SUM(Studentcount) AS total
    FROM (SELECT l.longtext AS language,
                 COUNT(distinct s.studentnr) AS Studentcount
          FROM student s
          JOIN pupil p on p.id = s.pupilid
          JOIN pupillanguage pl on pl.personid = p.id
          JOIN language l on l.id = pl.languageid
          GROUP BY l.longtext
          ORDER BY Studentcount DESC
         ) t
    GROUP BY language, Studentcount
)
Query
SELECT initcap(language), Studentcount, total
FROM Lang
UNION ALL
SELECT cast(count(language) as varchar(6)) || ' Languages', null, null
FROM Lang
Now I have 1 major issue which is assigning a TOTAL SUM value of students. I need this so I can calculate the percentage of numbers of students / total students in a column. However obviously my total value is not giving me what I need.
Output
languages   students  total
---------   --------  -----
French      734       734
Afrikaans   93        93
Greek       117       117
German      55        55
Armenian    160       160
Malaysian   5         5
Danish      15        15
American    5         5
Swedish     24        24
Bulgarian   1043      1043
Expected output:
languages   students  Percentage
---------   --------  ----------
French      734       24,46
Afrikaans   93        3,12
Greek       117       3,9
German      55        1,83
Armenian    160       5,33
Malaysian   5         0,16
Danish      15        0,5
American    5         0,16
Swedish     24        0,8
Bulgarian   1043      34,76
How can I calculate the final total sum as a value to calculate the percentages
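The calculation is "each language's count divided by the grand total"; in standard SQL the grand total can be attached to every row with a window aggregate such as SUM(studentcount) OVER (), with no extra GROUP BY. A pandas sketch of the arithmetic, using only the rows shown above (the expected output was evidently computed over the full language list, so these percentages differ):

```python
import pandas as pd

counts = pd.Series({'French': 734, 'Afrikaans': 93, 'Greek': 117, 'German': 55,
                    'Armenian': 160, 'Malaysian': 5, 'Danish': 15, 'American': 5,
                    'Swedish': 24, 'Bulgarian': 1043})

# Each count divided by the grand total, as a percentage
pct = (counts / counts.sum() * 100).round(2)
print(pct['French'])
```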

Time Dilation (slowing) based on 0 to 1, and the amount it influences
Hopefully I can explain this. So I'm making a game, and I want to control the passage of time for the different objects in the world. I want a global dilation that I can pause the game (all objects and their timers) with. I want a dilation that is room specific that some objects are affected by, but others may not be. I also want ones specific to the individual entities themselves. This was what I was thinking:
List<Tuple<double,double>> DilationAffectors = stuff;
This list would be all the Dilations affecting the object, and the impact that they have over the object (0 being no effect, .5 means halfway affected, 1 is full effect). The following logic is kinda how I would apply it to the passage of time
var elapsedTime = TimeSpan.Zero;
Stopwatch stopwatch = new Stopwatch();
stopwatch.Start();
while (true)
{
    // The function in question
    double dilation = getDilation(DilationAffectors);
    var timeBetween = stopwatch.Elapsed - LastTick;
    elapsedTime += TimeSpan.FromTicks((long)(timeBetween.Ticks * dilation));
    LastTick = stopwatch.Elapsed;
    //...
}
So I've tried a number of variations on the getDilation function, but I can't seem to get the math right.
- If the room has a speed of 1.0, it shouldn't affect the object's speed
- If the global is a speed of 0, time should not pass at all
- If the object-specific dilation is .5, it should move at half speed (unless affected by room speed, then it's half of the room speed)
- etc. etc. for other conditions
I can't seem to get the different dilations to play together. If one has a dilation of 0, but affects the object by 0, it still affects it in certain ways. I tried averaging them, but this wasn't the desired effect. I can't seem to find the solution to this math equation. Am I overcomplicating this? Or thinking about it wrong?
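One composition rule that satisfies all of the bullet points is to interpolate each dilation toward 1.0 by its weight and then multiply the resulting factors. A Python sketch of the formula (the question's code is C#, but it carries over directly):

```python
# Each affector is (dilation, weight): weight 0 leaves the object untouched,
# weight 1 applies the full dilation. Interpolating each factor toward 1.0
# by its weight and multiplying composes them, so a fully weighted dilation
# of 0 freezes time while a zero-weight dilation has no effect at all.
def get_dilation(affectors):
    result = 1.0
    for dilation, weight in affectors:
        result *= 1.0 + weight * (dilation - 1.0)
    return result

print(get_dilation([(1.0, 1.0)]))              # 1.0  -> full-speed room: no effect
print(get_dilation([(0.0, 1.0)]))              # 0.0  -> global pause stops time
print(get_dilation([(0.5, 1.0), (0.5, 1.0)]))  # 0.25 -> half speed of a half-speed room
print(get_dilation([(0.0, 0.0)]))              # 1.0  -> zero weight means zero effect
```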

Dask Not Parallelizing Function
I am trying to use Dask to parallelize a function computation over a DataFrame object. The function takes inputs from one DataFrame, filters the other DataFrame based on those inputs, and returns the output. This is an O(N^2) computation and there is no way to get around it. However, it can be parallelized: simple multiprocessing gives speed-ups. However, I am not getting any speed-up using Dask. Here is what my computation looks like.
output_dask = train_dask_df.map_partitions(
    lambda df: df.apply(lambda x: my_function(x['id'], x['date'], cache_pandas_df), axis=1)
).compute()
I've tried out all the schedulers. The function can't be vectorized. I have tried Numba acceleration and that gives marginal returns. I know parallelization should work here because I can get somewhat linear speed-ups with multiprocessing. I would love any advice to speed up and parallelize this sort of function.
pandas groupby got KeyError
I am using pandas to calculate some stats of a data file and got an error. It's reproducible with this simple sample code:
import pandas as pd

df = pd.DataFrame({'A': [1,2,3,4,5,6,7,8,9],
                   'B': [1,2,3,1,2,3,1,2,3],
                   'C': ['a', 'b', 'a', 'b', 'a', 'b', 'a', 'a', 'b']})

def testFun2(x):
    return pd.DataFrame({'xlen': x.shape[0]})

def testFun(x):
    b = x['B']
    print "b equals to {}".format(b)  # This line prints okay
    c = x['C']
    out = pd.DataFrame()
    for a in x['A'].unique():
        subx = x[x.A == a]
        subxg = testFun2(subx)
        out = pd.concat([out, subxg])
    return out

df.groupby(['B', 'C']).apply(lambda x: testFun(x))
The whole error output look like this:
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-21-979d23aa904c> in <module>()
     18     return out
     19
---> 20 df.groupby(['B', 'C']).apply(lambda x: testFun(x))

C:\Users\Administrator\Anaconda2\lib\site-packages\pandas\core\groupby\groupby.pyc in apply(self, func, *args, **kwargs)
    929         with _group_selection_context(self):
--> 930             return self._python_apply_general(f)

C:\Users\Administrator\Anaconda2\lib\site-packages\pandas\core\groupby\groupby.pyc in _python_apply_general(self, f)
    935         keys, values, mutated = self.grouper.apply(f, self._selected_obj,
--> 936                                                    self.axis)

C:\Users\Administrator\Anaconda2\lib\site-packages\pandas\core\groupby\groupby.pyc in apply(self, f, data, axis)
   2272             group_axes = _get_axes(group)
--> 2273             res = f(group)

<ipython-input-21-979d23aa904c> in testFun(x)
---> 11     b = x['B']
     12     c = x['C']

C:\Users\Administrator\Anaconda2\lib\site-packages\pandas\core\frame.pyc in __getitem__(self, key)
--> 2688             return self._getitem_column(key)

C:\Users\Administrator\Anaconda2\lib\site-packages\pandas\core\indexes\base.pyc in get_loc(self, key, method, tolerance)
   3078                 return self._engine.get_loc(key)
   3079             except KeyError:
-> 3080                 return self._engine.get_loc(self._maybe_cast_indexer(key))

pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()

KeyError: 'B'
However, I found that if testFun2 is changed to something simpler, like:

def testFun2(x):
    return 1

then the error won't occur. This is very confusing to me: testFun2 has nothing to do with the line b = x['B'], right? Why did I get the error in the first place? Thanks!
An algorithm for mapping one categorical distribution onto another?
I ran into what I thought was a simple problem the other day: given a population of entities (companies, in this case), I want to project the distribution of one category onto another. For example, here are two categories:
60%    40%
 A1     A2
 B1     B2
40%    60%
In the above, category A represents 'number of employees' and category B represents 'annual sales'. Each category is a different way of slicing/bucketing the same population, so the total number of companies in 'A' categories equals the total in all 'B' categories.
Because category A1 is greater than category B1, my thinking is to get a weighted average, something like: "take 2/3 (40%/60%) of B1 and 1/3 (the remainder) of B2". In the above example, if B1 is a revenue of $1,500 and B2 of $3,000, then the average for category A1 would be $2,000 (i.e. (2/3)*$1,500 + (1/3)*$3,000). And for category A2, the revenue would simply be $3,000.
Darned if I can figure out how to code this in a way that will scale to 'n' categories and cover all situations, though. The approach I ended up using was to inflate each dataframe by the 'counts' (a solution I found here, btw), join using the row index, and then take the average there. I know this boils down to a weighted-average problem, but I couldn't sort it out.
Here's my example Python/Pandas code:
import pandas as pd

d_A = {'Cat_A_Name': ['A1', 'A2'], 'Cat_A_Empl': [2, 5], 'Cat_A_Counts': [60, 40]}
d_B = {'Cat_B_Name': ['B1', 'B2'], 'Cat_B_Rev': [1500, 3000], 'Cat_B_Counts': [40, 60]}
df_A = pd.DataFrame(data=d_A)
df_B = pd.DataFrame(data=d_B)

# For each dataframe, do a row-expansion by count (and then drop that count column)
df_A = df_A.loc[df_A.index.repeat(df_A['Cat_A_Counts'].astype('int'))].reset_index(drop=True)
df_A.drop({'Cat_A_Counts'}, axis=1, inplace=True)
df_B = df_B.loc[df_B.index.repeat(df_B['Cat_B_Counts'].astype('int'))].reset_index(drop=True)
df_B.drop({'Cat_B_Counts'}, axis=1, inplace=True)

# Then join by index, take the average by Cat_A_Name, and drop duplicate rows
df_All = df_A.join(df_B)
df_All['Avg_Cat_B_Rev'] = df_All.groupby('Cat_A_Name')['Cat_B_Rev'].transform(pd.Series.mean)
df_All.drop_duplicates(subset='Avg_Cat_B_Rev', inplace=True)

print(df_A.head())
print(df_B.head())
print(df_All)
Is there a more elegant solution to this problem? Ideally I would be able to use a weighted average that took into account the proper category.
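One way to avoid the row-inflation trick is to treat both category lists as aligned cumulative distributions and walk them in step, accumulating the overlap of each A bucket with each B bucket as the weight of B's value. A sketch, assuming both count lists sum to the same total and are in the intended order:

```python
# counts_a, counts_b: bucket sizes of the two categorizations (equal totals);
# values_b: the value attached to each B bucket. Returns the weighted
# average B value for each A bucket.
def project(counts_a, counts_b, values_b):
    out, j, rem_b = [], 0, counts_b[0]
    for ca in counts_a:
        total, rem_a = 0.0, ca
        while rem_a > 0:
            take = min(rem_a, rem_b)        # overlap of the two buckets
            total += take * values_b[j]
            rem_a -= take
            rem_b -= take
            if rem_b == 0 and j + 1 < len(values_b):
                j += 1
                rem_b = counts_b[j]
        out.append(total / ca)
    return out

print(project([60, 40], [40, 60], [1500, 3000]))   # [2000.0, 3000.0]
```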

Graphing a MultiLevel Index Dataframe with Pandas/Seaborn
I have a project where I am supposed to recreate the graph below (made in Excel) in Python. Col1 & Col2 share the left Y-axis and Col3 is on a separate Y-axis on the right. I have been given a multi-indexed dataframe containing data about three frequencies tested over four days, which I provide below.
The dataframe below is shortened to include only days, not days+hours like in the graph above. I just didn't include the full dataframe because that would have been 141 rows.
d = {'col1': [10.965867092294166, 11.740605446907654, 11.874630095282816, 12.293252661642788,
              1.0550815232087125, 1.2600652799903598, 1.2287602539013704, 1.5049839034547996,
              1.5017439425026908, 1.857491967675137, 2.004142472879474, 2.238680621735559],
     'col2': [1.2, 1.1590909090909092, 1.0666666666666667, 1.0294117647058822,
              8.48, 7.454545454545454, 8.483333333333333, 6.588235294117647,
              3.72, 3.0681818181818183, 3.066666666666667, 2.6470588235294117],
     'col3': [30.98667, 30.40908977272727, 30.049998333333335, 30.352949999999996,
              87.36, 89.5303, 91.63333000000002, 90.45097647058823,
              82.746664, 79.47727727272728, 78.97222166666667, 78.09804705882352]}
idx = pd.MultiIndex.from_product([['Freq1', 'Freq2', 'Freq3'], [21, 22, 23, 24]],
                                 names=['FrequencyNumber', 'DayofMonth'])
df = pd.DataFrame(d, idx)

In[]: df
Out[]:
                                 col1      col2       col3
FrequencyNumber DayofMonth
Freq1           21          10.965867  1.200000  30.986670
                22          11.740605  1.159091  30.409090
                23          11.874630  1.066667  30.049998
                24          12.293253  1.029412  30.352950
Freq2           21           1.055082  8.480000  87.360000
                22           1.260065  7.454545  89.530300
                23           1.228760  8.483333  91.633330
                24           1.504984  6.588235  90.450976
Freq3           21           1.501744  3.720000  82.746664
                22           1.857492  3.068182  79.477277
                23           2.004142  3.066667  78.972222
                24           2.238681  2.647059  78.098047
Here is what I've done to attempt to graph it.
df.col1.plot(color='blue')
df.col2.plot(kind='bar', color='orange')
df.col3.plot(color='grey')
and I get the following result: This doesn't look great because it shows just the averages over days instead of hours, but that isn't my problem with it. I would really like an X-axis format similar to the Excel graph so the graph is easy to interpret.
I've tried looking for answers on here related to my topic and some mentioned stuff about unpacking the dataframe and I couldn't get that to work. Thank you very much for your help.
PS: I am unable to get my format for matplotlib to look like a Seaborn graph (See picture below). All the graphs I've made so far (Heatmaps and Scatterplots) I've done using Seaborn so I'd really like the color themes of my graphs to match when I finish.
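For the two-scale layout, matplotlib's twinx() creates a second y-axis that shares the same x-axis, and the bar/line mix can be drawn onto explicit axes. A minimal sketch with made-up two-row data (pandas bar slots sit at positions 0..n-1, so the lines are plotted against those same positions):

```python
import matplotlib
matplotlib.use('Agg')   # headless backend, safe for scripts
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({'col1': [10.9, 11.7], 'col2': [1.2, 1.16],
                   'col3': [31.0, 30.4]}, index=[21, 22])

fig, ax = plt.subplots()
df['col2'].plot(kind='bar', ax=ax, color='orange')     # left axis: bars
ax.plot(range(len(df)), df['col1'], color='blue')      # left axis: line
ax2 = ax.twinx()                                       # right axis, shared x
ax2.plot(range(len(df)), df['col3'], color='grey')
```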

Two programs using leveldb
I am wondering what is an efficient way of using leveldb from two different programs. Unfortunately leveldb can be used by only one process. One of the solutions I have tried was the multilevel module https://github.com/juliangruber/multilevel. But the timing results are really bad:
one program without multilevel: 2399 millis
one program with multilevel: 7482 millis
two programs with multilevel: 13202 millis
Do you know any other solutions?

Multilevel push menu is too heavy for browser when resizing window
When I made a multilevel menu for mobile phones, I found that having several levels hidden beyond the horizontal edge of the screen, each with a width of 100%, is quite demanding for the browser when the window width changes.
I do not want to use any plugin; I just want to keep my own simple code. So do you have any advice on how to lay out each level hidden off-screen before it slides in, so that the browser does not have to recalculate all of them when the window width changes in the meantime? Because each level changes its width due to its width: 100%.
The levels also have a transition, defined in another class.
So basically every level has this setup:
#nav-menu, #nav-menu .nav-level {
    z-index: 1000;
    position: fixed;
    top: 0;
    left: 0;
    right: 0;
    bottom: 0;
    width: 100%;
    height: 100%;
    -webkit-transform: translate3d(100%,0,0);
    -moz-transform: translate3d(100%,0,0);
    transform: translate3d(100%,0,0);
}
Thank you