Cannot download the whole source code for a web page

I am trying to get a list of hyperlinks from the following webpage:

When I inspect the source code in Chrome I can find those links, which point to PDF files (a simple Ctrl+F for "pdf" locates them), for example:

However, no matter how I try to retrieve the source code in R to extract the PDF link addresses, I never get the whole source.

I have tried rvest, RCurl (switching peer verification on) and RSelenium, and none of them retrieves the whole code.
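For reference, my rvest and RSelenium attempts look roughly like this (the real URL is replaced by a placeholder here, and the exact driver setup may differ on your machine):

```r
library(rvest)

# rvest attempt: the returned document never contains the PDF links
page  <- read_html("https://example.com/page")   # placeholder URL
links <- html_attr(html_nodes(page, "a"), "href")
grep("pdf", links, value = TRUE)                 # comes back empty for me

library(RSelenium)

# RSelenium attempt: getPageSource() also misses the links
rd    <- rsDriver(browser = "chrome")
remDr <- rd$client
remDr$navigate("https://example.com/page")       # placeholder URL
src   <- remDr$getPageSource()[[1]]
grepl("pdf", src)                                # FALSE in my case
```

In both cases the downloaded HTML is noticeably shorter than what Chrome's "View source" shows.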

Every time I fetch the HTML (using read_html, getURL, or getPageSource()) and search it for any instance of the string "pdf", I get nothing:

library(RCurl)
library(stringr)

# search the downloaded source for "pdf" -- returns character(0) for me
unlist(str_extract_all(getURL(''), 'pdf'))


Does anyone know what I can do to get the whole source code?