April 17, 2019

  File "E:/PythonProject/PaChong/first.py", line 15, in <module>
    rp.parse((urlopen('http://www.jianshu.com/robots.txt').read().decode('utf-8').split('\n')))
  File "E:\Python\lib\urllib\request.py", line 222, in ur
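The traceback is cut off, but the error appears to be raised inside urllib itself (urllib/request.py), not in the robots.txt parser: RobotFileParser.parse() does accept a list of lines. One common cause with sites such as jianshu.com is the server rejecting urllib's default "Python-urllib" User-Agent. Below is a minimal sketch of a working setup, assuming that is the problem; the User-Agent string and the article URL are placeholders, not values from the original post.

```python
from urllib.request import Request, urlopen
from urllib.robotparser import RobotFileParser

# Placeholder User-Agent; some sites reject urllib's default "Python-urllib/x.y".
req = Request('https://www.jianshu.com/robots.txt',
              headers={'User-Agent': 'Mozilla/5.0'})

rp = RobotFileParser()
# parse() takes an iterable of lines, so splitting the decoded body is fine.
rp.parse(urlopen(req).read().decode('utf-8').split('\n'))

# can_fetch(useragent, url) asks whether that agent may fetch the given URL.
print(rp.can_fetch('*', 'https://www.jianshu.com/p/example'))  # example URL
```

The shorter built-in route is rp.set_url(...) followed by rp.read(), but read() fetches the file with urllib's default User-Agent, so it can run into the same rejection.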
If Googlebot finds a robots.txt file for a site, it will usually abide by the suggestions and proceed to crawl the site. If Googlebot encounters an error while trying to access a site’s robots.txt file and can't determine if one exists or not, it won't crawl the site. ...
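A client-side crawler can mirror that conservative policy: obey robots.txt when it is readable, treat a clearly missing file as "no rules", and skip crawling when the file's status cannot be determined. The sketch below assumes urllib and treats only a 404 as "no robots.txt"; the function name and user agent are illustrative, not part of any real crawler.

```python
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen
from urllib.robotparser import RobotFileParser


def may_crawl(page_url, robots_url, user_agent='ExampleBot'):
    """Obey robots.txt when readable, allow crawling if it is clearly absent (404),
    and refuse to crawl when we cannot tell whether the file exists."""
    req = Request(robots_url, headers={'User-Agent': user_agent})
    try:
        body = urlopen(req, timeout=10).read().decode('utf-8')
    except HTTPError as err:
        return err.code == 404   # 404: no robots.txt, so no restrictions
    except URLError:
        return False             # status unknown: play it safe and don't crawl
    rp = RobotFileParser()
    rp.parse(body.splitlines())
    return rp.can_fetch(user_agent, page_url)


if __name__ == '__main__':
    print(may_crawl('https://www.jianshu.com/p/example',
                    'https://www.jianshu.com/robots.txt'))
```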