Cannot download the whole source code for a web page

I am trying to get a list of the hyperlinks on the following webpage:

http://www.oabmg.org.br/examedeordem/home/index

When I inspect the page source in Chrome I can see those links, which point to PDF files (a simple Ctrl+F for 'pdf' finds them), for example: http://www.oabmg.org.br/areas/examedeordem/doc//2017.2%20(XXII%20EOU).pdf

However, no matter how I try to download the source code in R to extract the PDF link addresses, I never get the whole source code.

I have tried rvest, RCurl (with peer verification switched on) and RSelenium, and none of them returns the whole code.

Every time I retrieve the HTML (using read_html, getURL, or getPageSource()) and search for any instance of the string 'pdf', I get nothing. The RCurl attempt, for example:

library(RCurl)      # getURL() comes from RCurl, not rvest
library(stringr)

# fetch the raw HTML and look for any occurrence of 'pdf'
unlist(str_extract_all(getURL('http://www.oabmg.org.br/examedeordem/home/index'), 'pdf'))

character(0)
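
For reference, the rvest and RSelenium attempts look roughly like this (a minimal sketch; the RSelenium part assumes a Selenium server already running on localhost:4444 with Chrome, which may differ from my actual setup):

library(rvest)
library(stringr)
library(RSelenium)

url <- 'http://www.oabmg.org.br/examedeordem/home/index'

# rvest: parse the static HTML and pull every <a> href
links <- read_html(url) %>% html_nodes("a") %>% html_attr("href")
str_subset(links, "pdf")                  # character(0)

# RSelenium: load the page in a real browser, then grab its source
remDr <- remoteDriver(remoteServerAddr = "localhost", port = 4444L,
                      browserName = "chrome")
remDr$open()
remDr$navigate(url)
src <- remDr$getPageSource()[[1]]
unlist(str_extract_all(src, "pdf"))       # character(0) as well
remDr$close()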

Does anyone know what I can do to get the whole source code?