First, "manually" extract the information from the downloaded HTML with string matching and regular expressions.
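A minimal sketch of this first approach, using only the JDK's regex support (the HTML fragment and the pattern are made up for illustration; this style breaks quickly on malformed or nested markup, which is why the later approaches exist):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexScrape {
    public static void main(String[] args) {
        // A made-up HTML fragment standing in for the downloaded page.
        String html = "<a href=\"/a.html\">A</a> <a href='/b.html'>B</a>";
        // Naive pattern for href values; fine for quick scripts, but it
        // will mis-handle comments, scripts, and badly nested quotes.
        Pattern p = Pattern.compile("href=[\"']([^\"']+)[\"']");
        Matcher m = p.matcher(html);
        while (m.find()) {
            System.out.println(m.group(1)); // prints /a.html then /b.html
        }
    }
}
```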
Second, use JTidy to convert the HTML to XHTML, then run XQuery (e.g., with Saxon) over the XHTML to extract the required information.
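A rough sketch of the JTidy step, assuming the jtidy jar is on the classpath (the file names are made up; the resulting `index.xhtml` is what you would then feed to an XQuery engine such as Saxon):

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import org.w3c.tidy.Tidy;

public class TidyToXhtml {
    public static void main(String[] args) throws Exception {
        Tidy tidy = new Tidy();
        tidy.setXHTML(true);        // emit XHTML instead of HTML
        tidy.setQuiet(true);
        tidy.setShowWarnings(false);
        try (FileInputStream in = new FileInputStream("index.html");
             FileOutputStream out = new FileOutputStream("index.xhtml")) {
            tidy.parse(in, out);    // cleaned XHTML, ready for XQuery
        }
    }
}
```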
Third, the approach I prefer:
- Create a TagSoup HTML parser, which provides a SAX interface;
- Use XOM to build a document tree from the HTML via the TagSoup SAX parser;
- Use XOM's built-in XPath facility (i.e., Jaxen) to query the resulting document.
import java.io.File;
import nu.xom.Builder;
import nu.xom.Document;
import nu.xom.Nodes;
import org.xml.sax.XMLReader;
// Create a TagSoup SAX parser (it implements the SAX XMLReader interface).
XMLReader parser = new org.ccil.cowan.tagsoup.Parser();
// Use the TagSoup parser to build an XOM document from the HTML file.
Document doc = new Builder(parser).build(new File("index.html"));
// XPath query: find all "table" elements in any namespace. TagSoup puts
// elements in the XHTML namespace, so a plain //table would match nothing.
Nodes nodes = doc.query("//*[local-name()='table']");
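The `local-name()` trick matters because TagSoup emits elements in the XHTML namespace, and an unprefixed XPath name only matches elements in no namespace. You can see the same behavior with just the JDK's own DOM and XPath engines, no third-party jars (the XHTML string below is made up for the demo):

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class LocalNameDemo {
    static final String XHTML = "<html xmlns='http://www.w3.org/1999/xhtml'>"
            + "<body><table><tr><td>x</td></tr></table></body></html>";

    // Count the nodes matched by an XPath expression over a namespace-aware DOM.
    static int count(String expr) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true); // like the document TagSoup produces
        org.w3c.dom.Document doc = f.newDocumentBuilder()
                .parse(new InputSource(new StringReader(XHTML)));
        NodeList nodes = (NodeList) XPathFactory.newInstance().newXPath()
                .evaluate(expr, doc, XPathConstants.NODESET);
        return nodes.getLength();
    }

    public static void main(String[] args) throws Exception {
        // Unprefixed names match only the null namespace, so this finds nothing:
        System.out.println(count("//table"));                    // 0
        // local-name() ignores the namespace and finds the table:
        System.out.println(count("//*[local-name()='table']"));  // 1
    }
}
```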