Python Scrapy yielding sequentially

Hi, I want to yield items from two different web pages using the same crawler and send the items from both pages through the same pipeline. However, I keep getting all the results from the first page followed by all the results from the second page.

Example: Page A contains a list of links, and each link on Page A points to its own child page B. I am scraping items 1 and 2 for each link on Page A, and item 3 from the corresponding Page B.

I want the data storage order to be 1st.B.3, 1st.A.1, 1st.A.2, 2nd.B.3, 2nd.A.1, 2nd.A.2, where 1st.B.3 means item 3 of the Page B behind the first link on Page A.

But what I am getting is 1st.A.1, 1st.A.2, 2nd.A.1, 2nd.A.2, 1st.B.3, 2nd.B.3. Any advice would help, thanks!
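To make the ordering concrete, here is a plain-Python sketch (made-up labels, no Scrapy involved) of the grouping I want versus what I actually get:

```python
# Illustrative labels only; '1st' and '2nd' stand for the first
# and second links on Page A.
links = ['1st', '2nd']

# Desired: each link's Page-B item is grouped with its Page-A items.
desired = []
for link in links:
    desired.extend([f'{link}.B.3', f'{link}.A.1', f'{link}.A.2'])

# Actual: all Page-A items come first, then all Page-B items.
actual = [f'{link}.A.{i}' for link in links for i in (1, 2)]
actual += [f'{link}.B.3' for link in links]

print(desired)  # ['1st.B.3', '1st.A.1', '1st.A.2', '2nd.B.3', '2nd.A.1', '2nd.A.2']
print(actual)   # ['1st.A.1', '1st.A.2', '2nd.A.1', '2nd.A.2', '1st.B.3', '2nd.B.3']
```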

The entire code:

import scrapy

class PESpider(scrapy.Spider):
    name = 'Wiki_PE'
    start_urls = ['']

    def Website(self, response):
        count = 1
        for row in response.css('table.infobox.vcard tr'):
            count += 1
            if row.css(':nth-child(' + str(count) + ') th::text').extract_first() == 'Website':
                private_site = row.css(':nth-child(' + str(count) + ') a::attr(href)').extract_first()
                print(private_site)
                print('exiting website')
                return {'Website': private_site}

    def parse(self, response):
        count = 0
        for row in response.css('div.div-col.columns.column-width ul li'):
            count += 1
            print(count)
            name = row.css(':nth-child(' + str(count) + ') a::attr(title)').extract()[1]
            country = row.css(':nth-child(' + str(count) + ') a::attr(href)').extract()[0][6:]
            link = row.css(':nth-child(' + str(count) + ') a::attr(href)').extract()[1]
            website = '' + link
            yield scrapy.Request(website, callback=self.Website, priority=1)
            yield {
                'Name': name,
                'Country': country,
            }