Hi

I put together a small piece with Python and the Scrapy library(?) for some quick web scraping (it turned out not to be so quick after all; by hand I would probably have finished already).

Scrapy fails with the error: DNS lookup failed. What confuses me is the 'www.domain' part; I would expect it to be 'www.domain.tld'?

I doubt this has anything to do with SELinux?

The file(s) are located in /var/www/py/scraper/scraper/[lp.py]
Oh, and the domain in the log has been changed.

And the domain has been tested and works.
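One way to narrow this down, independent of Scrapy, is to check exactly which hostname gets handed to the DNS resolver. A minimal standard-library sketch (here 'http://www.domain.tld' stands in for the anonymized URL from the log):

```python
import socket
from urllib.parse import urlparse

# Scrapy resolves the hostname it parses out of the request URL, so if the
# error message shows 'www.domain' without the TLD, the URL the spider is
# actually using is probably missing it. Parsing shows the exact string
# that goes to the resolver:
url = "http://www.domain.tld"  # placeholder for the real (anonymized) URL
host = urlparse(url).hostname
print("hostname sent to DNS:", host)

# Resolving that hostname directly reproduces the failure outside Scrapy:
try:
    socket.gethostbyname(host)
    print("DNS OK")
except socket.gaierror as e:
    print("DNS lookup failed:", e)
```

If the parsed hostname already lacks the TLD here, the problem is in the spider's start_urls rather than in the machine's DNS setup.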
# scrapy runspider lp.py -o test.json
2016-01-23 00:21:24 [scrapy] INFO: Scrapy 1.0.4 started (bot: lpspider)
2016-01-23 00:21:24 [scrapy] INFO: Optional features available: ssl, http11
2016-01-23 00:21:24 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'lpspider.spiders', 'FEED_FORMAT': 'json', 'SPIDER_MODULES': ['lpspider.spiders'], 'FEED_URI': 'test.json', 'BOT_NAME': 'lpspider'}
2016-01-23 00:21:24 [scrapy] INFO: Enabled extensions: CloseSpider, FeedExporter, TelnetConsole, LogStats, CoreStats, SpiderState
2016-01-23 00:21:24 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-01-23 00:21:24 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-01-23 00:21:24 [scrapy] INFO: Enabled item pipelines:
2016-01-23 00:21:24 [scrapy] INFO: Spider opened
2016-01-23 00:21:24 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-01-23 00:21:24 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-01-23 00:21:24 [scrapy] DEBUG: Retrying <GET http://www.domain.tld> (failed 1 times): DNS lookup failed: address 'www.domain' not found: [Errno -2] Name or service not known.
2016-01-23 00:21:24 [scrapy] DEBUG: Retrying <GET http://www.domain.tld> (failed 2 times): DNS lookup failed: address 'www.domain' not found: [Errno -2] Name or service not known.
2016-01-23 00:21:24 [scrapy] DEBUG: Gave up retrying <GET http://www.domain.tld> (failed 3 times): DNS lookup failed: address 'www.domain' not found: [Errno -2] Name or service not known.
2016-01-23 00:21:24 [scrapy] ERROR: Error downloading <GET http://www.domain.tld>: DNS lookup failed: address 'www.domain' not found: [Errno -2] Name or service not known.
2016-01-23 00:21:24 [scrapy] INFO: Closing spider (finished)
2016-01-23 00:21:24 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 3,
'downloader/exception_type_count/twisted.internet.error.DNSLookupError': 3,
'downloader/request_bytes': 687,
'downloader/request_count': 3,
'downloader/request_method_count/GET': 3,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 1, 22, 22, 21, 24, 733750),
'log_count/DEBUG': 4,
'log_count/ERROR': 1,
'log_count/INFO': 7,
'scheduler/dequeued': 3,
'scheduler/dequeued/memory': 3,
'scheduler/enqueued': 3,
'scheduler/enqueued/memory': 3,
'start_time': datetime.datetime(2016, 1, 22, 22, 21, 24, 512501)}
2016-01-23 00:21:24 [scrapy] INFO: Spider closed (finished)