PySpark: UnicodeEncodeError when trying to join two RDDs

UnicodeEncodeError: 'ascii' codec can't encode character u'\xa9' in position 261: ordinal not in range(128)

The error above is what I got when I tried to make a new table. I used the following command: table3 = u' '.join((table1, table2)).encode('utf-8').strip(). But it did not work. Below is my code and the actual output for each RDD.

The code to create the first RDD

import csv

table1 = sc.textFile('inventory') \
    .map(lambda line: next(csv.reader([line]))) \
    .map(lambda fields: ((fields[0], fields[8], fields[10]), 1))
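To show what each record looks like after parsing, here is the same per-line csv.reader step on a single made-up 'inventory' line (the column values other than BibNum, ItemCollection, and ItemLocation are placeholders, not real data):

```python
import csv

# A sample line with 11 comma-separated fields, matching the indices used
# above: fields[0] = BibNum, fields[8] = ItemCollection, fields[10] = ItemLocation.
line = '3011076,Title,Author,ISBN,Edition,Year,Publisher,Subjects,ncrdr,Floating,qna'

fields = next(csv.reader([line]))        # parse one CSV line into a list
record = ((fields[0], fields[8], fields[10]), 1)  # ((BibNum, ItemCollection, ItemLocation), 1)
```

This produces the ((BibNum, ItemCollection, ItemLocation), 1) shape shown in the output below.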

First RDD, actual output

[(('BibNum', 'ItemCollection', 'ItemLocation'), 1),
 (('3011076', 'ncrdr', 'qna'), 1),
 (('2248846', 'nycomic', 'lcy'), 1)]

The code to create the second RDD

table2 = sc.textFile('checkouts') \
    .map(lambda line: next(csv.reader([line]))) \
    .map(lambda fields: ((fields[0], fields[3], fields[5]), 1))

Second RDD, actual output

[(('BibNum', 'ItemCollection', 'CheckoutDateTime'), 1),
 (('1842225', 'namys', '05/23/2005 03:20:00 PM'), 1),
 (('1928264', 'ncpic', '12/14/2005 05:56:00 PM'), 1),
 (('1982511', 'ncvidnf', '08/11/2005 01:52:00 PM'), 1),
 (('2026467', 'nacd', '10/19/2005 07:47:00 PM'), 1)]

Lastly, I tried the following code to join table1 and table2: table3 = u' '.join((table1, table2)).encode('utf-8').strip(). But it did not work. Please enlighten me if you have any idea about this error.
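For what it's worth, my understanding is that u' '.join(...) is plain string concatenation (it expects an iterable of strings, not RDDs), while what I want is a key-based join. A plain-Python sketch of the join semantics I'm after, using made-up (key, value) pairs in ordinary lists instead of RDDs:

```python
# Two tables as lists of (key, value) pairs; the keys here are invented
# examples, not my real data.
table1 = [(('3011076', 'ncrdr'), 1), (('1842225', 'namys'), 1)]
table2 = [(('1842225', 'namys'), 1)]

# A key-based join pairs up the values for every key present in both
# tables, the same shape rdd1.join(rdd2) would give: (key, (v1, v2)).
lookup = dict(table2)
joined = [(k, (v, lookup[k])) for k, v in table1 if k in lookup]
```

Here joined would contain only the keys shared by both tables, each mapped to a pair of values, which is the result I was hoping table3 would hold.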