I want to write a script that filters a web page and returns a single value — for example, a SHA fingerprint or a software version. Looking at a typical page, there are tables, quotes, code sections, and elements with ids.
Can curl do this on its own, or do I need another tool like grep (or both)?
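A minimal sketch of such a pipeline: curl fetches the page and grep extracts the value. The URL and the regex pattern here are placeholders — adjust both for the actual page and the value you want (for a SHA-256 fingerprint, a pattern like `[a-f0-9]{64}` would fit instead).

```shell
# Hypothetical example: extract a version number from a page.
# In a real script the HTML would come from curl, e.g.:
#   html=$(curl -s https://example.com/download)
# Here we use a stand-in snippet so the sketch is self-contained.
html='<td id="version">1.2.3</td>'

# grep -o prints only the matching part; -E enables extended regex.
version=$(printf '%s' "$html" | grep -oE '[0-9]+\.[0-9]+\.[0-9]+')
echo "$version"   # → 1.2.3
```

Note that grep works on text, not on HTML structure; for anything more involved (e.g. selecting an element by id), an actual HTML parser such as `xmllint --html --xpath` is more robust than a regex.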
submitted by /u/alohl669
Read more here: https://www.reddit.com/r/linux/comments/n45ylc/can_i_filter_html_code_using_curl/
This content was originally published by /u/alohl669 at Linux, GNU/Linux, free software..., and is syndicated here via their RSS feed.