
Scrapy Source Code (2): Where the Crawl Begins

This is the 33rd original article by 崔斯特.

Let's start studying the Scrapy source code (๑• . •๑)

Running Scrapy

Generally speaking, a Scrapy project is run like this (running Scrapy from a script is not considered here):

Usage examples:
$ scrapy crawl myspider
[ ... myspider starts crawling ... ]
$ scrapy runspider myspider.py
[ ... spider starts crawling ... ]

A better approach, though, is to create a new Python file like the one below (it makes debugging easier):

from scrapy import cmdline
cmdline.execute("scrapy crawl myspider".split())

It is easy to see that the entry point for running Scrapy is the execute() function in cmdline.py, so let's look at what this function does.
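Incidentally, this is also why the scrapy console command lands in execute(): Scrapy's setup.py registers it as a console-script entry point. The declaration looks roughly like this (paraphrased, not copied verbatim from setup.py):

from setuptools import setup

setup(
    # ... name, version, packages and other metadata omitted ...
    entry_points={
        # the `scrapy` executable simply calls scrapy.cmdline:execute()
        'console_scripts': ['scrapy = scrapy.cmdline:execute'],
    },
)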

Analyzing the source

def execute(argv=None, settings=None):
    if argv is None:
        argv = sys.argv
    # --- backwards compatibility for scrapy.conf.settings singleton ---
    if settings is None and 'scrapy.conf' in sys.modules:
        from scrapy import conf
        if hasattr(conf, 'settings'):
            settings = conf.settings
    # ------------------------------------------------------------------

If no argv is passed, it is taken directly from sys.argv; the block above only exists for backwards compatibility with the old scrapy.conf.settings singleton.

if settings is None:
    settings = get_project_settings()
    # set EDITOR from environment if available
    try:
        editor = os.environ['EDITOR']
    except KeyError: pass
    else:
        settings['EDITOR'] = editor
check_deprecated_settings(settings)

# --- backwards compatibility for scrapy.conf.settings singleton ---
import warnings
from scrapy.exceptions import ScrapyDeprecationWarning
with warnings.catch_warnings():
    warnings.simplefilter("ignore", ScrapyDeprecationWarning)
    from scrapy import conf
    conf.settings = settings
# ------------------------------------------------------------------

When no settings object was passed in, the project settings are loaded by calling get_project_settings(); the EDITOR environment variable, if present, is also copied into the settings. get_project_settings() lives in project.py under the utils folder:

def get_project_settings():
    if ENVVAR not in os.environ:
        project = os.environ.get('SCRAPY_PROJECT', 'default')
        init_env(project)

project.py
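get_project_settings() is also useful on its own: from inside a project directory you can call it in a script to inspect the fully resolved configuration. A minimal sketch (it assumes it is run from inside a Scrapy project whose settings.py defines BOT_NAME):

from scrapy.utils.project import get_project_settings

settings = get_project_settings()           # loads <project>/settings.py merged with defaults
print(settings.get('BOT_NAME'))             # project name, from the project's settings.py
print(settings.getbool('ROBOTSTXT_OBEY'))   # typed accessors are available as well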

The init_env() function it calls looks like this:

def init_env(project='default', set_syspath=True):
    """Initialize environment to use command-line tool from inside a project
    dir. This sets the Scrapy settings module and modifies the Python path to
    be able to locate the project module.
    """
    cfg = get_config()
    if cfg.has_option('settings', project):
        os.environ['SCRAPY_SETTINGS_MODULE'] = cfg.get('settings', project)
    closest = closest_scrapy_cfg()
    if closest:
        projdir = os.path.dirname(closest)
        if set_syspath and projdir not in sys.path:
            sys.path.append(projdir)

conf.py

As the docstring says, this initializes the environment: it reads the scrapy.cfg config (walking up from the current directory to find the closest one), takes the settings module declared there (the project's settings.py) and stores it in the SCRAPY_SETTINGS_MODULE environment variable, then adds the project directory to the Python path so the project module can be located.
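The "walk up and find the config" part is done by get_config() / closest_scrapy_cfg(). A simplified sketch of the idea, for illustration only (this is not the actual Scrapy implementation, and the function name is made up):

import os

def find_closest_scrapy_cfg(path='.'):
    """Walk up parent directories until a scrapy.cfg file is found."""
    path = os.path.abspath(path)
    cfg = os.path.join(path, 'scrapy.cfg')
    if os.path.exists(cfg):
        return cfg
    parent = os.path.dirname(path)
    if parent == path:  # reached the filesystem root, give up
        return ''
    return find_closest_scrapy_cfg(parent)

Back in project.py, the rest of get_project_settings() then builds and returns the Settings object: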

settings = Settings()
settings_module_path = os.environ.get(ENVVAR)
if settings_module_path:
    settings.setmodule(settings_module_path, priority='project')

# XXX: remove this hack
pickled_settings = os.environ.get("SCRAPY_PICKLED_SETTINGS_TO_OVERRIDE")
if pickled_settings:
    settings.setdict(pickle.loads(pickled_settings), priority='project')

# XXX: deprecate and remove this functionality
env_overrides = {k[7:]: v for k, v in os.environ.items() if
                 k.startswith('SCRAPY_')}
if env_overrides:
    settings.setdict(env_overrides, priority='project')

return settings

project.py

At this point get_project_settings() is done: as its name suggests, it returns the project settings. Now back to execute().
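Note the priority='project' arguments above: a Settings object records a priority for every key, and a later set only takes effect if its priority is at least as high as the existing one. A small sketch of that behaviour (the setting name is arbitrary):

from scrapy.settings import Settings

settings = Settings()
settings.set('DOWNLOAD_DELAY', 5, priority='project')   # priority 20
settings.set('DOWNLOAD_DELAY', 1, priority='command')   # priority 10, lower -> ignored
settings.set('DOWNLOAD_DELAY', 0, priority='cmdline')   # priority 40, higher -> wins

print(settings.getint('DOWNLOAD_DELAY'))        # 0
print(settings.getpriority('DOWNLOAD_DELAY'))   # 40

The next chunk of execute() figures out which command to run and dispatches it: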

inproject = inside_project()
cmds = _get_commands_dict(settings, inproject)
cmdname = _pop_command_name(argv)
parser = optparse.OptionParser(formatter=optparse.TitledHelpFormatter(), \
    conflict_handler='resolve')
if not cmdname:
    _print_commands(settings, inproject)
    sys.exit(0)
elif cmdname not in cmds:
    _print_unknown_command(settings, cmdname, inproject)
    sys.exit(2)

inside_project() checks whether the command is being run from inside a project, mainly by trying to import the project's settings module and by looking for a scrapy.cfg config file. _get_commands_dict() then reads the commands package and turns every command class into a {cmd_name: cmd_instance} dictionary, and _pop_command_name() pulls the command name (here crawl) out of argv.
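A rough sketch of what building that dictionary amounts to, as an illustration only (this is not the real _get_commands_dict(); it skips the COMMANDS_MODULE extension point and all error handling):

import inspect
from importlib import import_module
from pkgutil import iter_modules

from scrapy.commands import ScrapyCommand

def get_commands_dict(inproject):
    """Map command name -> command instance for the built-in scrapy.commands package."""
    cmds = {}
    package = import_module('scrapy.commands')
    for _, modname, _ in iter_modules(package.__path__):
        module = import_module('scrapy.commands.%s' % modname)
        for _, obj in inspect.getmembers(module, inspect.isclass):
            if (issubclass(obj, ScrapyCommand) and obj is not ScrapyCommand
                    and obj.__module__ == module.__name__):
                # outside a project, only commands that don't require one are listed
                if inproject or not obj.requires_project:
                    cmds[modname] = obj()
    return cmds

The next part of execute() then configures the chosen command and parses its options: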

cmd = cmds[cmdname]
parser.usage = "scrapy %s %s" % (cmdname, cmd.syntax())
parser.description = cmd.long_desc()
settings.setdict(cmd.default_settings, priority='command')
cmd.settings = settings
cmd.add_options(parser)
opts, args = parser.parse_args(args=argv[1:])
_run_print_help(parser, cmd.process_options, args, opts)

The command instance is looked up by name, its default_settings are merged into the settings at 'command' priority, its option-parsing rules are registered via add_options(), the remaining command-line arguments are parsed, and process_options() lets the command handle them.
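For reference, every command class in that dictionary follows the same shape. A minimal, hedged sketch of a custom command (the names myproject, hello and --name are made up; the file would live at myproject/commands/hello.py, be enabled with COMMANDS_MODULE = 'myproject.commands' in settings.py, and the command name comes from the file name). Note that in Scrapy versions of this era the parser is an optparse OptionParser, as in the source quoted above:

from scrapy.commands import ScrapyCommand

class HelloCommand(ScrapyCommand):
    requires_project = True                    # only usable inside a project
    default_settings = {'LOG_ENABLED': False}  # merged at 'command' priority, see above

    def syntax(self):
        return "[options]"

    def short_desc(self):
        return "Print a greeting (toy example)"

    def add_options(self, parser):
        ScrapyCommand.add_options(self, parser)
        parser.add_option("--name", dest="name", default="world",
                          help="who to greet")

    def run(self, args, opts):
        print("hello, %s" % opts.name)

It would then show up in scrapy -h and be run as scrapy hello --name Scrapy, going through exactly the execute() path traced here.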

Finally, look at this last piece of code.

cmd.crawler_process = CrawlerProcess(settings)
_run_print_help(parser, _run_command, cmd, args, opts)
sys.exit(cmd.exitcode)

A CrawlerProcess instance is created and attached to the command, then the command is executed; here that command is crawl.

def _run_command(cmd, args, opts):
    if opts.profile:
        _run_command_profiled(cmd, args, opts)
    else:
        cmd.run(args, opts)
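cmd.run(args, opts) is where the concrete command takes over. For the crawl command (scrapy/commands/crawl.py), the body essentially hands the spider name to the CrawlerProcess created above, roughly like this (a paraphrase, not the verbatim source, with argument checks and error handling omitted):

def run(self, args, opts):
    # args holds the positional arguments left after option parsing,
    # e.g. ['myspider'] for `scrapy crawl myspider`
    spname = args[0]
    self.crawler_process.crawl(spname, **opts.spargs)
    self.crawler_process.start()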

Seeing this, I was reminded of the section Run Scrapy from a script in the documentation:

# Here's an example showing how to run a single spider with it.
import scrapy
from scrapy.crawler import CrawlerProcess

class MySpider(scrapy.Spider):
    # Your spider definition
    ...

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})

process.crawl(MySpider)
process.start() # the script will block here until the crawling is finished

So running a Scrapy crawl always goes through CrawlerProcess; to dig deeper, have a look at the source in scrapy/scrapy/crawler.py:

"""
A class to run multiple scrapy crawlers in a process simultaneously.
This class extends :class:`~scrapy.crawler.CrawlerRunner` by adding support
for starting a Twisted `reactor`_ and handling shutdown signals, like the
keyboard interrupt command Ctrl-C. It also configures top-level logging.
This utility should be a better fit than
:class:`~scrapy.crawler.CrawlerRunner` if you aren't running another
Twisted `reactor`_ within your application.
The CrawlerProcess object must be instantiated with a
:class:`~scrapy.settings.Settings` object.
:param install_root_handler: whether to install root logging handler
(default: True)
This class shouldn't be needed (since Scrapy is responsible of using it
accordingly) unless writing scripts that manually handle the crawling
process. See :ref:`run-from-script` for an example.
"""

Finally, here is a map of the Scrapy source tree.

Summary

In short, there are a few steps:

  1. Read the configuration and apply it to the crawler
  2. Turn all the command classes into a name-to-instance dictionary
  3. Initialize a CrawlerProcess instance and run the spider

(Reading all this gives me a headache; there are too many function names to remember.)

Previous posts:

  1. Scrapy Source Code (1): An Overview of the Crawl Flow
  2. Scrapy Source Code (2): Where the Crawl Begins