
I have looked around and only found solutions that render a URL to HTML. However, I need a way to render a webpage (that I already have, and that contains JavaScript) to its final HTML.

Want: Webpage (with JavaScript) ---> HTML

Not: URL --> Webpage (with JavaScript) ---> HTML

I couldn't figure out how to make the other code work the way I wanted.

This is the code I was using that renders URLs: http://webscraping.com/blog/Scraping-JavaScript-webpages-with-webkit/

For clarity, the code above takes the URL of a webpage where some parts of the page are rendered by JavaScript, so if I scrape the page normally, say with urllib2, I won't get all the links etc. that only appear after the JavaScript has run.

However, I want to be able to scrape a page, again say with urllib2, and then render that page and get the resulting HTML. (This is different from the above code, since that takes a URL as its argument.)
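
Roughly, the flow I'm after looks like this sketch (render_to_html is just a placeholder for the missing piece, not a real function):

import urllib2

raw_html = urllib2.urlopen('http://example.com/page').read()  # page whose JavaScript has not run yet
final_html = render_to_html(raw_html)  # <-- placeholder: the rendering step I'm asking about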

Any help is appreciated, thanks guys :)

  • I find what you want unclear. Perhaps you can give an example of what you mean by "render a webpage to proper HTML". Do you want the actual DOM? Do you want the textual HTML? Rendering can be done when you "feed the webpage into a browser" (i.e., open this text file with a browser), so it's not clear what else you want to achieve that is not already done by the browser. – Commented Apr 2, 2015 at 4:20
  • Now that you've made it clearer - I would go with Selenium Web Driver. Have you considered that? If you give a more concrete example of your urllib2 code, then I might be able to refer to it with a corresponding Selenium code. – Commented Apr 2, 2015 at 4:36
  • Now it's completely unclear what it is that you want: "I want this part but in a way like the first example" - But the first example doesn't do any of that. It just says in a comment "I want to render text and get the pure HTML". So do you want to render the URL or not??? What difference does it make if you first fetch the data from the URL into a file using urllib2? In either case you have to send an HTTP request at some point. You can take the text file and feed it into Selenium (or any other scraping utility), but it's not going to be any different than using the URL directly. – Commented Apr 2, 2015 at 4:56
  • The URL is protected by Cloudflare, and I don't know how to fetch the bypassed URL because it gives me the Cloudflare block page if I fetch the URL directly. I have a way to get the bypassed HTML, however. – Commented Apr 2, 2015 at 5:08
  • So you can fetch it only with urllib2? How is that possible??? – Commented Apr 2, 2015 at 5:16

3 Answers


You can pip install selenium from a command line, and then run something like:

from selenium import webdriver
from urllib2 import urlopen

url = 'http://www.google.com'
file_name = 'C:/Users/Desktop/test.txt'

# Fetch the raw page and save it to a local file
conn = urlopen(url)
data = conn.read()
conn.close()

file = open(file_name, 'wt')
file.write(data)
file.close()

# Open the saved file in Firefox so its JavaScript runs, then grab the rendered HTML
browser = webdriver.Firefox()
browser.get('file:///' + file_name)
html = browser.page_source
browser.quit()
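
Note that urllib2 exists only on Python 2; on Python 3 a minimal sketch of the same approach would use urllib.request (the file path below is just an example):

from urllib.request import urlopen
from selenium import webdriver

url = 'http://www.google.com'
file_name = '/tmp/test.html'  # example path; adjust for your system

# Fetch the raw page and save it to a local file
with urlopen(url) as conn:
    data = conn.read()
with open(file_name, 'wb') as f:
    f.write(data)

# Let Firefox run the page's JavaScript, then read the rendered HTML
browser = webdriver.Firefox()
browser.get('file://' + file_name)
html = browser.page_source
browser.quit()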
  • I hit another problem, however. Is there somewhere more convenient I could ask you about it? – Commented Apr 2, 2015 at 7:25
  • @user3928006: Post it in another question. You'll be asking not just me, but the entire community (so you'll have better chances of getting a good answer). You can link it in a comment to this question if you want my specific attention to it at that point. – Commented Apr 2, 2015 at 7:27
  • It's quite relevant to this question; something in the rendered page isn't rendering how I would expect. I'll update this question with my edited version of your code. – Commented Apr 2, 2015 at 7:29
  • @user3928006: No, don't do it this way, it will make the answer obsolete and partially irrelevant. This is not how things are usually done here. If your new problem is related to this question (or to the answer), then link it within the new question that you post. – Commented Apr 2, 2015 at 7:31
  • Not so simple. This requires having both the Firefox browser and the geckodriver installed. – Leonid, Commented Jun 10, 2020 at 16:56

The module I use for doing this is requests_html. The first time it is used, it automatically downloads a Chromium browser; after that you can render any webpage (with JavaScript).

requests_html also supports HTML parsing.

It is basically an alternative to Selenium.

Example:

from requests_html import HTMLSession

session = HTMLSession()
r = session.get(URL)
r.html.render()  # you can use r.html.render(sleep=1) if you want
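
Once render() has run, the rendered markup can be read back from r.html, e.g. (a small usage sketch; the URL is just an example):

from requests_html import HTMLSession

session = HTMLSession()
r = session.get('http://www.example.com')  # example URL
r.html.render()                  # runs the page's JavaScript in headless Chromium
rendered_html = r.html.html      # the full HTML after rendering
links = r.html.links             # links found in the rendered page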
  • +1 for r.html.render(sleep=1), which resolved a problem I had been stuck on for 3 days without finding a solution. – Commented Mar 14, 2022 at 15:06
  • This doesn't work for me. The library is completely messed up and session.get does not return an HTMLResponse anymore. It's completely useless. – Commented Sep 24, 2024 at 22:27

try webdriver.Firefox().get('url')
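
On its own, get() only navigates to the page; to capture the HTML after the JavaScript has run you would still read page_source, e.g. (a minimal sketch with an example URL):

from selenium import webdriver

browser = webdriver.Firefox()
browser.get('http://www.example.com')  # example URL; get() just loads the page
html = browser.page_source             # rendered HTML after JavaScript has run
browser.quit()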
