How to make Scrapy crawl data from a URL whose response language is controlled by another URL

Recently I've been researching crawling, and I chose dataurl as my target. I can easily get the data with Scrapy, but the response is always in English.

To get the Chinese data, I found that what dataurl returns depends on which language was selected through urlControllLangContentByServerSide via its plang parameter (e.g. &plang=1). I even appended &plang=3 to dataurl, or sent it the form data {plang: 3}, but that doesn't work.

In short, urlControllLangContentByServerSide has to be visited first if I want the Chinese data from dataurl. I have confirmed this with many tests in Postman, but I don't know how to express this in code.

Thank you for taking the time to read and think about this.

    import random
    import time

    import scrapy


    def start_requests(self):
        urlControllLangContentByServerSide = 'http://messefrankfurt.kenti-creative.com/index.php?moduleId=129&pageName=list2&pId=14&plang=3'
        dataurl = 'http://messefrankfurt.kenti-creative.com/modules/exhibitor/ajax/more2.php?moduleId=129&pageName=list2&pId=14&yId=0&hId=0&uId=-2&cId=undefined&aId=-1&fId=0&plang=3'
        # I even appended &plang=3 to dataurl, but that doesn't work.
        for s in range(5):
            time.sleep(.5)
            # Visit this URL several times, trying to tell the server which
            # language to use -- maybe the server keeps the language in a session.
            yield scrapy.Request(urlControllLangContentByServerSide,
                                 callback=self.parse_m, method='POST')

        for i in range(5):
            form_data = {'page': '%s' % i}
            self.current_index = i
            yield scrapy.FormRequest(dataurl, callback=self.parse,
                                     method='POST', formdata=form_data)
        print(self.wrongs)  # self.wrongs is a counter defined elsewhere in the spider

    def parse_m(self, response):
        with open('mother%s.html' % random.randint(3, 90), 'wb') as f:
            f.write(response.body)