Highlights:
- Security bug fixes.
- Support for Twisted >= 23.8.0.
- Documentation improvements.
Addressed ReDoS vulnerabilities:
- scrapy.utils.iterators.xmliter is now deprecated in favor of scrapy.utils.iterators.xmliter_lxml, which scrapy.spiders.XMLFeedSpider now uses (see the migration sketch below). To minimize the impact of this change on existing code, scrapy.utils.iterators.xmliter_lxml now supports indicating the node namespace with a prefix in the node name, and big files with highly nested trees when using libxml2 2.7+.
- Fixed regular expressions in the implementation of the scrapy.utils.response.open_in_browser function.
Please see the cc65-xxvf-f7r9 security advisory for more information.
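A minimal migration sketch for the first fix, assuming a spider callback that iterates over hypothetical <product> nodes in an XML feed:

```python
from scrapy.utils.iterators import xmliter_lxml

def parse(self, response):
    # xmliter_lxml parses nodes with lxml instead of the regular
    # expressions that made the old xmliter ReDoS-prone; a namespaced
    # node can be indicated with a prefix in the node name.
    for node in xmliter_lxml(response, "product"):
        yield {"name": node.xpath("./name/text()").get()}
```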
- DOWNLOAD_MAXSIZE and DOWNLOAD_WARNSIZE now also apply to the decompressed response body. Please see the 7j7m-v7m3-jqm7 security advisory for more information.
- Also in relation to the 7j7m-v7m3-jqm7 security advisory, the deprecated scrapy.downloadermiddlewares.decompression module has been removed.
- The Authorization header is now dropped on redirects to a different domain. Please see the cw9j-q3vf-hrrv security advisory for more information.
- The Twisted dependency is no longer restricted to < 23.8.0. (6024, 6064, 6142)
- The OS signal handling code was refactored to no longer use private Twisted functions. (6024, 6064, 6112)
- Improved documentation for scrapy.crawler.Crawler initialization changes made in the 2.11.0 release. (6057, 6147)
- Extended documentation for Request.meta. (5565)
- Fixed the dont_merge_cookies documentation. (5936, 6077)
- Added a link to Zyte's export guides to the feed exports documentation. (6183)
- Added a missing note about backward-incompatible changes in scrapy.exporters.PythonItemExporter to the 2.11.0 release notes. (6060, 6081)
- Added a missing note about removing the deprecated scrapy.utils.boto.is_botocore() function to the 2.8.0 release notes. (6056, 6061)
- Other documentation improvements. (6128, 6144, 6163, 6190, 6192)
- Added Python 3.12 to the CI configuration, re-enabled tests that were disabled when the pre-release support was added. (5985, 6083, 6098)
- Fixed a test issue on PyPy 7.3.14. (6204, 6205)
Highlights:
- Spiders can now modify settings in their from_crawler methods, e.g. based on spider arguments.
- Periodic logging of stats.
- Most of the initialization of scrapy.crawler.Crawler instances is now done in Crawler.crawl, so the state of instances before that method is called is now different compared to older Scrapy versions. We do not recommend using Crawler instances before crawl is called. (6038)
- scrapy.Spider.from_crawler is now called before the initialization of various components previously initialized in scrapy.crawler.Crawler.__init__ and before the settings are finalized and frozen. This change was needed to allow changing the settings in scrapy.Spider.from_crawler. If you want to access the final setting values and the initialized Crawler attributes in the spider code as early as possible, you can do this in start_requests or in a handler of the engine_started signal. (6038)
- The TextResponse.json method now requires the response to be in a valid JSON encoding (UTF-8, UTF-16, or UTF-32). If you need to deal with JSON documents in an invalid encoding, use json.loads(response.text) instead, as in the sketch after this list. (6016)
- scrapy.exporters.PythonItemExporter used binary output by default, but it no longer does. (6006, 6007)
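A sketch of the suggested fallback for JSON documents in a non-standard encoding; the callback is illustrative:

```python
import json

def parse(self, response):
    try:
        data = response.json()  # requires UTF-8, UTF-16 or UTF-32 (2.11+)
    except ValueError:
        # The exact exception depends on how the body is mis-encoded;
        # fall back to the declared response encoding, as suggested above.
        data = json.loads(response.text)
    yield data
```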
- Removed the binary export mode of scrapy.exporters.PythonItemExporter, deprecated in Scrapy 1.1.0. (6006, 6007)
Note
If you are using this Scrapy version on Scrapy Cloud with a stack that includes an older Scrapy version and get a "TypeError: Unexpected options: binary" error, you may need to add scrapinghub-entrypoint-scrapy >= 0.14.1 to your project requirements or switch to a stack that includes Scrapy 2.11.
- Removed the CrawlerRunner.spiders attribute, deprecated in Scrapy 1.0.0; use CrawlerRunner.spider_loader instead. (6010)
- The scrapy.utils.response.response_httprepr function, deprecated in Scrapy 2.6.0, has now been removed. (6111)
- Running Crawler.crawl more than once on the same scrapy.crawler.Crawler instance is now deprecated. (1587, 6040)
- Spiders can now modify settings in their from_crawler method, e.g. based on spider arguments (see the first sketch after this list). (1305, 1580, 2392, 3663, 6038)
- Added the scrapy.extensions.periodic_log.PeriodicLog extension, which can be enabled to log stats and/or their differences periodically (see the second sketch after this list). (5926)
- Optimized the memory usage in TextResponse.json by removing unnecessary body decoding. (5968, 6016)
- Links to .webp files are now ignored by link extractors. (6021)
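A minimal sketch of the first feature, assuming a hypothetical spider that takes a concurrency spider argument:

```python
import scrapy

class ExampleSpider(scrapy.Spider):  # hypothetical spider
    name = "example"

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        # Settings are not frozen yet at this point (2.11+), so they can
        # still be changed, e.g. based on -a concurrency=... arguments.
        concurrency = getattr(spider, "concurrency", None)
        if concurrency is not None:
            crawler.settings.set(
                "CONCURRENT_REQUESTS", int(concurrency), priority="spider"
            )
        return spider
```

And a sketch of enabling PeriodicLog; the PERIODIC_LOG_* setting names are taken from the extension's documentation:

```python
# settings.py
EXTENSIONS = {
    "scrapy.extensions.periodic_log.PeriodicLog": 0,
}
PERIODIC_LOG_STATS = True  # log current stat values periodically
PERIODIC_LOG_DELTA = True  # also log deltas between consecutive reports
```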
- Fixed the logging of enabled add-ons. (6036)
- Fixed MailSender producing invalid message bodies when the charset argument is passed to MailSender.send. (5096, 5118)
- Fixed an exception when accessing self.EXCEPTIONS_TO_RETRY from a subclass of RetryMiddleware. (6049, 6050)
- scrapy.settings.BaseSettings.getdictorlist, used to parse FEED_EXPORT_FIELDS, now handles tuple values. (6011, 6013)
- Calls to datetime.utcnow(), no longer recommended, have been replaced with calls to datetime.now() with a timezone. (6014)
- Updated a deprecated function call in a pipeline example. (6008, 6009)
- Extended typing hints. (6003, 6005, 6031, 6034)
- Pinned brotli to 1.0.9 for the PyPy tests, as 1.1.0 breaks them. (6044, 6045)
- Other CI and pre-commit improvements. (6002, 6013, 6046)
Marked Twisted >= 23.8.0 as unsupported. (6024, 6026)
Highlights:
- Added Python 3.12 support, dropped Python 3.7 support.
- The new add-ons framework simplifies configuring 3rd-party components that support it.
- Exceptions to retry can now be configured.
- Many fixes and improvements for feed exports.
- Dropped support for Python 3.7. (5953)
- Added support for the upcoming Python 3.12. (5984)
- Minimum versions increased for these dependencies:
  - lxml: 4.3.0 → 4.4.1
  - cryptography: 3.4.6 → 36.0.0
- pkg_resources is no longer used. (5956, 5958)
- boto3 is now recommended instead of botocore for exporting to S3. (5833)
- The value of the FEED_STORE_EMPTY setting is now True instead of False. In earlier Scrapy versions empty files were created even when this setting was False (which was a bug that is now fixed), so the new default should keep the old behavior. (872, 5847)
- When a function is assigned to the FEED_URI_PARAMS setting, returning None or modifying the params input parameter, deprecated in Scrapy 2.6, is no longer supported. (5994, 5996)
- The scrapy.utils.reqser module, deprecated in Scrapy 2.6, is removed. (5994, 5996)
- The scrapy.squeues classes PickleFifoDiskQueueNonRequest, PickleLifoDiskQueueNonRequest, MarshalFifoDiskQueueNonRequest, and MarshalLifoDiskQueueNonRequest, deprecated in Scrapy 2.6, are removed. (5994, 5996)
- The open_spiders property and the has_capacity and schedule methods of scrapy.core.engine.ExecutionEngine, deprecated in Scrapy 2.6, are removed. (5994, 5998)
- Passing a spider argument to the spider_is_idle, crawl, and download methods of scrapy.core.engine.ExecutionEngine, deprecated in Scrapy 2.6, is no longer supported. (5994, 5998)
- scrapy.utils.datatypes.CaselessDict is deprecated; use scrapy.utils.datatypes.CaseInsensitiveDict instead. (5146)
- Passing the custom argument to scrapy.utils.conf.build_component_list is deprecated. It was used in the past to merge FOO and FOO_BASE setting values, but now Scrapy uses scrapy.settings.BaseSettings.getwithbase to do the same. Code that uses this argument and cannot be switched to getwithbase() can be switched to merging the values explicitly. (5726, 5923)
- Added support for Scrapy add-ons. (5950)
- Added the RETRY_EXCEPTIONS setting, which configures which exceptions will be retried by RetryMiddleware (see the sketch after this list). (2701, 5929)
- Added the possibility to close the spider if no items were produced in the specified time, configured by CLOSESPIDER_TIMEOUT_NO_ITEM. (5979)
- Added support for the AWS_REGION_NAME setting to feed exports. (5980)
- Added support for using pathlib.Path objects that refer to absolute Windows paths in the FEEDS setting. (5939)
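A settings sketch for RETRY_EXCEPTIONS and CLOSESPIDER_TIMEOUT_NO_ITEM; the exception list and timeout value are illustrative:

```python
# settings.py
# Entries may be exception classes or their import-path strings; this
# value replaces the default list of retried exceptions.
RETRY_EXCEPTIONS = [
    OSError,
    "twisted.internet.defer.TimeoutError",
]
# Close the spider if no items were produced for 10 minutes.
CLOSESPIDER_TIMEOUT_NO_ITEM = 600
```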
- Fixed creating empty feeds even with FEED_STORE_EMPTY=False. (872, 5847)
- Fixed using absolute Windows paths when specifying output files. (5969, 5971)
- Fixed problems with uploading large files to S3 by switching to multipart uploads (requires boto3). (960, 5735, 5833)
- Fixed the JSON exporter writing extra commas when some exceptions occur. (3090, 5952)
- Fixed the "read of closed file" error in the CSV exporter. (5043, 5705)
- Fixed an error when a component added by the class object throws NotConfigured with a message. (5950, 5992)
- Added the missing scrapy.settings.BaseSettings.pop method. (5959, 5960, 5963)
- Added CaseInsensitiveDict as a replacement for CaselessDict that fixes some API inconsistencies. (5146)
- Documented scrapy.Spider.update_settings. (5745, 5846)
- Documented possible problems with early Twisted reactor installation and their solutions. (5981, 6000)
- Added examples of making additional requests in callbacks. (5927)
- Improved the feed exports docs. (5579, 5931)
- Clarified the docs about request objects on redirection. (5707, 5937)
- Added support for running tests against the installed Scrapy version. (4914, 5949)
- Extended typing hints. (5925, 5977)
- Fixed the test_utils_asyncio.AsyncioTest.test_set_asyncio_event_loop test. (5951)
- Fixed the test_feedexport.BatchDeliveriesTest.test_batch_path_differ test on Windows. (5847)
- Enabled CI runs for Python 3.11 on Windows. (5999)
- Simplified skipping tests that depend on uvloop. (5984)
- Fixed the extra-deps-pinned tox env. (5948)
- Implemented cleanups. (5965, 5986)
Highlights:
- Per-domain download settings.
- Compatibility with new cryptography and new parsel.
- JMESPath selectors from the new parsel.
- Bug fixes.
- scrapy.extensions.feedexport._FeedSlot is renamed to scrapy.extensions.feedexport.FeedSlot and the old name is deprecated. (5876)
- Settings corresponding to DOWNLOAD_DELAY, CONCURRENT_REQUESTS_PER_DOMAIN and RANDOMIZE_DOWNLOAD_DELAY can now be set on a per-domain basis via the new DOWNLOAD_SLOTS setting (see the first sketch after this list). (5328)
- Added TextResponse.jmespath, a shortcut for JMESPath selectors available since parsel 1.8.1. (5894, 5915)
- Added feed_slot_closed and feed_exporter_closed signals. (5876)
- Added scrapy.utils.request.request_to_curl, a function to produce a curl command from a Request object (see the second sketch after this list). (5892)
- Values of FILES_STORE and IMAGES_STORE can now be pathlib.Path instances. (5801)
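A sketch of DOWNLOAD_SLOTS with an illustrative domain:

```python
# settings.py
DOWNLOAD_SLOTS = {
    "books.toscrape.com": {
        "concurrency": 1,          # per-domain CONCURRENT_REQUESTS_PER_DOMAIN
        "delay": 2,                # per-domain DOWNLOAD_DELAY
        "randomize_delay": False,  # per-domain RANDOMIZE_DOWNLOAD_DELAY
    },
}
```

And a sketch of request_to_curl; the request is illustrative:

```python
from scrapy import Request
from scrapy.utils.request import request_to_curl

request = Request("https://example.com/api", method="POST", body='{"q": 1}')
print(request_to_curl(request))  # prints an equivalent curl command
```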
- Fixed a warning with Parsel 1.8.1+. (5903, 5918)
- Fixed an error when using feed postprocessing with S3 storage. (5500, 5581)
- Added the missing scrapy.settings.BaseSettings.setdefault method. (5811, 5821)
- Fixed an error when using cryptography 40.0.0+ with DOWNLOADER_CLIENT_TLS_VERBOSE_LOGGING enabled. (5857, 5858)
- The checksums returned by FilesPipeline for files on Google Cloud Storage are no longer Base64-encoded. (5874, 5891)
- scrapy.utils.request.request_from_curl now supports $-prefixed string values for the curl --data-raw argument, which are produced by browsers for data that includes certain symbols. (5899, 5901)
- The parse command now also works with async generator callbacks. (5819, 5824)
- The genspider command now properly works with HTTPS URLs. (3553, 5808)
- Improved handling of asyncio loops. (5831, 5832)
- LinkExtractor now skips certain malformed URLs instead of raising an exception. (5881)
- scrapy.utils.python.get_func_args now supports more types of callables. (5872, 5885)
- Fixed an error when processing non-UTF8 values of Content-Type headers. (5914, 5917)
- Fixed an error breaking user handling of send failures in scrapy.mail.MailSender.send(). (1611, 5880)
- Expanded contributing docs. (5109, 5851)
- Added blacken-docs to pre-commit and reformatted the docs with it. (5813, 5816)
- Fixed a JS issue. (5875, 5877)
- Fixed make htmlview. (5878, 5879)
- Fixed typos and other small errors. (5827, 5839, 5883, 5890, 5895, 5904)
- Extended typing hints. (5805, 5889, 5896)
- Tests for most of the examples in the docs are now run as a part of CI; found problems were fixed. (5816, 5826, 5919)
- Removed usage of deprecated Python classes. (5849)
- Silenced include-ignored warnings from coverage. (5820)
- Fixed a random failure of the test_feedexport.test_batch_path_differ test. (5855, 5898)
- Updated docstrings to match output produced by parsel 1.8.1 so that they don't cause test failures. (5902, 5919)
- Other CI and pre-commit improvements. (5802, 5823, 5908)
This is a maintenance release, with minor features, bug fixes, and cleanups.
- The scrapy.utils.gz.read1 function, deprecated in Scrapy 2.0, has now been removed. Use the read1 method of GzipFile instead. (5719)
- The scrapy.utils.python.to_native_str function, deprecated in Scrapy 2.0, has now been removed. Use scrapy.utils.python.to_unicode instead. (5719)
- The scrapy.utils.python.MutableChain.next method, deprecated in Scrapy 2.0, has now been removed. Use __next__ instead. (5719)
- The scrapy.linkextractors.FilteringLinkExtractor class, deprecated in Scrapy 2.0, has now been removed. Use LinkExtractor instead. (5720)
- Support for using environment variables prefixed with SCRAPY_ to override settings, deprecated in Scrapy 2.0, has now been removed. (5724)
- Support for the noconnect query string argument in proxy URLs, deprecated in Scrapy 2.0, has now been removed. We expect proxies that used to need it to work fine without it. (5731)
- The scrapy.utils.python.retry_on_eintr function, deprecated in Scrapy 2.3, has now been removed. (5719)
- The scrapy.utils.python.WeakKeyCache class, deprecated in Scrapy 2.4, has now been removed. (5719)
- The scrapy.utils.boto.is_botocore() function, deprecated in Scrapy 2.4, has now been removed. (5719)
- scrapy.pipelines.images.NoimagesDrop is now deprecated. (5368, 5489)
- ImagesPipeline.convert_image must now accept a response_body parameter. (3055, 3689, 4753)
- Applied black coding style to files generated with the genspider and startproject commands. (5809, 5814)
- FEED_EXPORT_ENCODING is now set to "utf-8" in the settings.py file that the startproject command generates. With this value, JSON exports won't force the use of escape sequences for non-ASCII characters. (5797, 5800)
- The MemoryUsage extension now logs the peak memory usage during checks, and the binary unit MiB is now used to avoid confusion. (5717, 5722, 5727)
- The callback parameter of Request can now be set to scrapy.http.request.NO_CALLBACK, to distinguish it from None, as the latter indicates that the default spider callback (parse) is to be used. (5798)
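A sketch of the NO_CALLBACK usage described above, e.g. for a request sent by a component whose response no spider callback should handle; the URL is illustrative:

```python
from scrapy import Request
from scrapy.http.request import NO_CALLBACK

request = Request("https://example.com/oauth/token", callback=NO_CALLBACK)
```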
- Enabled unsafe legacy SSL renegotiation to fix access to some outdated websites. (5491, 5790)
- Fixed STARTTLS-based email delivery not working with Twisted 21.2.0 and later. (5386, 5406)
- Fixed the finish_exporting method of item exporters not being called for empty files. (5537, 5758)
- Fixed HTTP/2 responses getting only the last value for a header when multiple headers with the same name are received. (5777)
- Fixed an exception raised by the shell command in some cases when using asyncio. (5740, 5742, 5748, 5759, 5760, 5771)
- When using CrawlSpider, callback keyword arguments (cb_kwargs) added to a request in the process_request callback of a Rule will no longer be ignored. (5699)
- The images pipeline no longer re-encodes JPEG files. (3055, 3689, 4753)
- Fixed the handling of transparent WebP images by the images pipeline. (3072, 5766, 5767)
- scrapy.shell.inspect_response no longer inhibits SIGINT (Ctrl+C). (2918)
- LinkExtractor with unique=False no longer filters out links that have identical URL and text. (3798, 3799, 4695, 5458)
- RobotsTxtMiddleware now ignores URL protocols that do not support robots.txt (data://, file://). (5807)
- Silenced the filelock debug log messages introduced in Scrapy 2.6. (5753, 5754)
- Fixed the output of scrapy -h showing an unintended **commands** line. (5709, 5711, 5712)
- Made the active project indication in the output of commands clearer. (5715)
- Documented how to debug spiders from Visual Studio Code. (5721)
- Documented how DOWNLOAD_DELAY affects per-domain concurrency. (5083, 5540)
- Improved consistency. (5761)
- Fixed typos. (5714, 5744, 5764)
- Applied black coding style, sorted import statements, and introduced pre-commit. (4654, 4658, 5734, 5737, 5806, 5810)
- Switched from os.path to pathlib. (4916, 4497, 5682)
- Addressed many issues reported by Pylint. (5677)
- Improved code readability. (5736)
- Improved package metadata. (5768)
- Removed direct invocations of setup.py. (5774, 5776)
- Removed unnecessary OrderedDict usages. (5795)
- Removed unnecessary __str__ definitions. (5150)
- Removed obsolete code and comments. (5725, 5729, 5730, 5732)
- Fixed test and CI issues. (5749, 5750, 5756, 5762, 5765, 5780, 5781, 5782, 5783, 5785, 5786)
- Relaxed the restriction introduced in 2.6.2 so that the Proxy-Authorization header can again be set explicitly, as long as the proxy URL in the proxy metadata has no other credentials, and for as long as that proxy URL remains the same; this restores compatibility with scrapy-zyte-smartproxy 2.1.0 and older. (5626)
- Using the -O/--overwrite-output and -t/--output-format options together now produces an error instead of ignoring the former option. (5516, 5605)
- Replaced deprecated asyncio APIs that implicitly use the current event loop with code that explicitly requests a loop from the event loop policy. (5685, 5689)
- Fixed uses of deprecated Scrapy APIs in Scrapy itself. (5588, 5589)
- Fixed uses of a deprecated Pillow API. (5684, 5692)
- Improved code that checks if generators return values, so that it no longer fails on decorated methods and partial methods. (5323, 5592, 5599, 5691)
- Upgraded the Code of Conduct to Contributor Covenant v2.1. (5698)
- Fixed typos. (5681, 5694)
- Re-enabled some erroneously disabled flake8 checks. (5688)
- Ignored harmless deprecation warnings from typing in tests. (5686, 5697)
- Modernized our CI configuration. (5695, 5696)
Highlights:
- Added Python 3.11 support, dropped Python 3.6 support
- Improved support for asynchronous callbacks
- Asyncio support is enabled by default on new projects
- Output names of item fields can now be arbitrary strings
- Centralized request fingerprinting configuration is now possible
Python 3.7 or greater is now required; support for Python 3.6 has been dropped. Support for the upcoming Python 3.11 has been added.
The minimum required version of some dependencies has changed as well:
- lxml: 3.5.0 → 4.3.0
- Pillow (images pipeline): 4.0.0 → 7.1.0
- zope.interface: 5.0.0 → 5.1.0
(5512, 5514, 5524, 5563, 5664, 5670, 5678)
- ImagesPipeline.thumb_path must now accept an item parameter. (5504, 5508)
- The scrapy.downloadermiddlewares.decompression module is now deprecated. (5546, 5547)
- The process_spider_output method of spider middlewares can now be defined as an asynchronous generator. (4978)
- The output of Request callbacks defined as coroutines is now processed asynchronously. (4978)
- CrawlSpider now supports asynchronous callbacks. (5657)
- New projects created with the startproject command have asyncio support enabled by default. (5590, 5679)
- The FEED_EXPORT_FIELDS setting can now be defined as a dictionary to customize the output name of item fields, lifting the restriction that required output names to be valid Python identifiers, e.g. preventing them from having whitespace (see the sketch after this list). (1008, 3266, 3696)
- You can now customize request fingerprinting through the new REQUEST_FINGERPRINTER_CLASS setting, instead of having to change it on every Scrapy component that relies on request fingerprinting (also shown after this list). (900, 3420, 4113, 4762, 4524)
- jsonl is now supported and encouraged as a file extension for JSON Lines files. (4848)
- ImagesPipeline.thumb_path now receives the source item. (5504, 5508)
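A sketch of the two settings-based features above; the field names, labels, and module path are illustrative:

```python
# settings.py
FEED_EXPORT_FIELDS = {
    "name": "Product name",  # arbitrary output names, whitespace included
    "price": "Price (USD)",
}
REQUEST_FINGERPRINTER_CLASS = "myproject.fingerprinting.Fingerprinter"
```

A matching fingerprinter only needs a fingerprint() method returning bytes; ignoring the URL fragment is just an example policy:

```python
# myproject/fingerprinting.py (hypothetical module)
from hashlib import sha1

class Fingerprinter:
    def fingerprint(self, request):
        # Deduplicate requests regardless of their URL fragment.
        return sha1(request.url.split("#", 1)[0].encode()).digest()
```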
- When using Google Cloud Storage with a media pipeline, FILES_EXPIRES now also works when FILES_STORE does not point at the root of your Google Cloud Storage bucket. (5317, 5318)
- The parse command now supports asynchronous callbacks. (5424, 5577)
- When using the parse command with a URL for which there is no available spider, an exception is no longer raised. (3264, 3265, 5375, 5376, 5497)
- TextResponse now gives higher priority to the byte order mark when determining the text encoding of the response body, following the HTML living standard. (5601, 5611)
- MIME sniffing takes the response body into account in FTP and HTTP/1.0 requests, as well as in cached requests. (4873)
- MIME sniffing now detects valid HTML 5 documents even if the html tag is missing. (4873)
- An exception is now raised if ASYNCIO_EVENT_LOOP has a value that does not match the asyncio event loop actually installed. (5529)
- Fixed Headers.getlist returning only the last header. (5515, 5526)
- Fixed LinkExtractor not ignoring the tar.gz file extension by default. (1837, 2067, 4066)
- Clarified the return type of Spider.parse. (5602, 5608)
- To enable HttpCompressionMiddleware to support brotli-compressed responses, installing brotli is now recommended instead of installing brotlipy, as the former provides a more recent version of brotli.
- Signal documentation now mentions coroutine support and uses it in code examples. (4852, 5358)
- The documentation on bans now recommends Common Crawl instead of Google cache. (3582, 5432)
- The new components topic covers enforcing requirements on Scrapy components, like downloader middlewares, extensions, item pipelines, spider middlewares, and more; documentation on enforcing asyncio as a requirement has also been added. (4978)
- The settings documentation now indicates that setting values must be picklable. (5607, 5629)
- Removed outdated documentation. (5446, 5373, 5369, 5370, 5554)
- Fixed typos. (5442, 5455, 5457, 5461, 5538, 5553, 5558, 5624, 5631)
- Fixed other issues. (5283, 5284, 5559, 5567, 5648, 5659, 5665)
- Added a continuous integration job to run twine check. (5655, 5656)
- Addressed test issues and warnings. (5560, 5561, 5612, 5617, 5639, 5645, 5662, 5671, 5675)
- Cleaned up code. (4991, 4995, 5451, 5487, 5542, 5667, 5668, 5672)
- Applied minor code improvements. (5661)
- Added support for pyOpenSSL 22.1.0, removing support for SSLv3. (5634, 5635, 5636)
- Upgraded the minimum versions of the following dependencies:
  - cryptography: 2.0 → 3.3
  - pyOpenSSL: 16.2.0 → 21.0.0
  - service_identity: 16.0.0 → 18.1.0
  - Twisted: 17.9.0 → 18.9.0
  - zope.interface: 4.1.3 → 5.0.0
  (5621, 5632)
- Fixed test and documentation issues. (5612, 5617, 5631)
Security bug fix:
When HttpProxyMiddleware processes a request with proxy metadata, and that proxy metadata includes proxy credentials, HttpProxyMiddleware sets the Proxy-Authorization header, but only if that header is not already set.
There are third-party proxy-rotation downloader middlewares that set different proxy metadata every time they process a request.
Because of request retries and redirects, the same request can be processed by downloader middlewares more than once, including both HttpProxyMiddleware and any third-party proxy-rotation downloader middleware.
These third-party proxy-rotation downloader middlewares could change the proxy metadata of a request to a new value, but fail to remove the Proxy-Authorization header from the previous value of the proxy metadata, causing the credentials of one proxy to be sent to a different proxy.
To prevent the unintended leaking of proxy credentials, the behavior of HttpProxyMiddleware is now as follows when processing a request:
- If the request being processed defines proxy metadata that includes credentials, the Proxy-Authorization header is always updated to feature those credentials.
- If the request being processed defines proxy metadata without credentials, the Proxy-Authorization header is removed unless it was originally defined for the same proxy URL. To remove proxy credentials while keeping the same proxy URL, remove the Proxy-Authorization header.
- If the request has no proxy metadata, or that metadata is a falsy value (e.g. None), the Proxy-Authorization header is removed.
It is no longer possible to set a proxy URL through the proxy metadata but set the credentials through the Proxy-Authorization header. Set proxy credentials through the proxy metadata instead.
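A minimal rotation sketch under the new rules, with illustrative proxy URLs; credentials travel in the proxy metadata, never in a manually set Proxy-Authorization header:

```python
from itertools import cycle

class ProxyRotationMiddleware:  # hypothetical downloader middleware
    proxies = cycle([
        "https://user1:pass1@proxy1.example.com:8080",
        "https://user2:pass2@proxy2.example.com:8080",
    ])

    def process_request(self, request, spider):
        request.meta["proxy"] = next(self.proxies)
        # Defensively drop any Proxy-Authorization header left over from
        # a previous proxy, so stale credentials can never leak.
        request.headers.pop("Proxy-Authorization", None)
```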
Also fixes the following regressions introduced in 2.6.0:
- CrawlerProcess supports crawling multiple spiders again. (5435, 5436)
- Installing a Twisted reactor before Scrapy does (e.g. importing twisted.internet.reactor somewhere at the module level) no longer prevents Scrapy from starting, as long as a different reactor is not specified in TWISTED_REACTOR. (5525, 5528)
- Fixed an exception that was being logged after the spider finished under certain conditions. (5437, 5440)
- The --output/-o command-line parameter supports a value starting with a hyphen again. (5444, 5445)
- The scrapy parse -h command no longer throws an error. (5481, 5482)
Fixes a regression introduced in 2.6.0 that would unset the request method when following redirects.
Highlights:
- Security fixes for cookie handling
- Python 3.10 support
- asyncio support is no longer considered experimental, and works out-of-the-box on Windows regardless of your Python version
- Feed exports now support pathlib.Path output paths and per-feed item filtering and post-processing
When a Request object with cookies defined gets a redirect response causing a new Request object to be scheduled, the cookies defined in the original Request object are no longer copied into the new Request object.
If you manually set the Cookie header on a Request object and the domain name of the redirect URL is not an exact match for the domain of the URL of the original Request object, your Cookie header is now dropped from the new Request object.
The old behavior could be exploited by an attacker to gain access to your cookies. Please see the cjvr-mfj7-j4j8 security advisory for more information.
Note
It is still possible to enable the sharing of cookies between different domains with a shared domain suffix (e.g. example.com and any subdomain) by defining the shared domain suffix (e.g. example.com) as the cookie domain when defining your cookies. See the documentation of the Request class for more information.
When the domain of a cookie, either received in the Set-Cookie header of a response or defined in a Request object, is set to a public suffix, the cookie is now ignored unless the cookie domain is the same as the request domain.
The old behavior could be exploited by an attacker to inject cookies from a controlled domain into your cookiejar that could be sent to other domains not controlled by the attacker. Please see the mfjm-vh54-3f96 security advisory for more information.
- The h2 dependency is now optional, only needed to enable HTTP/2 support. (5113)
- The formdata parameter of FormRequest, if specified for a non-POST request, now overrides the URL query string, instead of being appended to it. (2919, 3579)
- When a function is assigned to the FEED_URI_PARAMS setting, now the return value of that function, and not the params input parameter, determines the feed URI parameters, unless that return value is None. (4962, 4966)
- In scrapy.core.engine.ExecutionEngine, the crawl, download, schedule, and spider_is_idle methods now raise RuntimeError if called before open_spider. (5090) These methods used to assume that ExecutionEngine.slot had been defined by a prior call to open_spider, so they were raising AttributeError instead.
- If the API of the configured scheduler does not meet expectations, TypeError is now raised at startup time. Before, other exceptions would be raised at run time. (3559)
- The _encoding field of serialized Request objects is now named encoding, in line with all other fields. (5130)
- scrapy.http.TextResponse.body_as_unicode, deprecated in Scrapy 2.2, has now been removed. (5393)
- scrapy.item.BaseItem, deprecated in Scrapy 2.2, has now been removed. (5398)
- scrapy.item.DictItem, deprecated in Scrapy 1.8, has now been removed. (5398)
- scrapy.Spider.make_requests_from_url, deprecated in Scrapy 1.4, has now been removed. (4178, 4356)
- When a function is assigned to the FEED_URI_PARAMS setting, returning None or modifying the params input parameter is now deprecated. Return a new dictionary instead. (4962, 4966)
- scrapy.utils.reqser is deprecated. (5130)
  - Instead of request_to_dict, use the new Request.to_dict method.
  - Instead of request_from_dict, use the new scrapy.utils.request.request_from_dict function.
- In scrapy.squeues, the following queue classes are deprecated: PickleFifoDiskQueueNonRequest, PickleLifoDiskQueueNonRequest, MarshalFifoDiskQueueNonRequest, and MarshalLifoDiskQueueNonRequest. You should instead use PickleFifoDiskQueue, PickleLifoDiskQueue, MarshalFifoDiskQueue, and MarshalLifoDiskQueue. (5117)
- Many aspects of scrapy.core.engine.ExecutionEngine that come from a time when this class could handle multiple Spider objects at a time have been deprecated. (5090)
  - The has_capacity method is deprecated.
  - The schedule method is deprecated; use crawl or download instead.
  - The open_spiders attribute is deprecated; use spider instead.
  - The spider parameter is deprecated for the spider_is_idle, crawl, and download methods. Instead, call open_spider first to set the Spider object.
- The scrapy.utils.response.response_httprepr function is now deprecated. (4972)
- You can now use item filtering to control which items are exported to each output feed (see the sketch after this list). (4575, 5178, 5161, 5203)
- You can now apply post-processing to feeds, and built-in post-processing plugins are provided for output file compression (also shown after this list). (2174, 5168, 5190)
- The FEEDS setting now supports pathlib.Path objects as keys. (5383, 5384)
- Enabling asyncio while using Windows and Python 3.8 or later will automatically switch the asyncio event loop to one that allows Scrapy to work. See the Windows-specific asyncio documentation. (4976, 5315)
- The genspider command now supports a start URL instead of a domain name. (4439)
- scrapy.utils.defer gained 2 new functions, deferred_to_future and maybe_deferred_to_future, to help await on Deferreds when using the asyncio reactor. (5288)
- Amazon S3 feed export storage gained support for temporary security credentials (AWS_SESSION_TOKEN) and endpoint customization (AWS_ENDPOINT_URL). (4998, 5210)
- New LOG_FILE_APPEND setting to allow truncating the log file. (5279)
- Request.cookies values that are bool, float or int are cast to str. (5252, 5253)
- You may now raise CloseSpider from a handler of the spider_idle signal to customize the reason why the spider is stopping. (5191)
- When using HttpProxyMiddleware, the proxy URL for non-HTTPS HTTP/1.1 requests no longer needs to include a URL scheme. (4505, 4649)
- All built-in queues now expose a peek method that returns the next queue object (like pop) but does not remove the returned object from the queue. (5112) If the underlying queue does not support peeking (e.g. because you are not using queuelib 1.6.1 or later), the peek method raises NotImplementedError.
- Request and Response now have an attributes attribute that makes subclassing easier. For Request, it also allows subclasses to work with scrapy.utils.request.request_from_dict. (1877, 5130, 5218)
- The open and close methods of the scheduler are now optional. (3559)
- HTTP/1.1 TunnelError exceptions now only truncate response bodies longer than 1000 characters, instead of those longer than 32 characters, making it easier to debug such errors. (4881, 5007)
- ItemLoader now supports non-text responses. (5145, 5269)
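A sketch combining per-feed item filtering and post-processing; the item class path is illustrative and GzipPlugin is one of the built-in plugins:

```python
# settings.py
FEEDS = {
    "products.jsonl.gz": {
        "format": "jsonlines",
        # Item filtering: only ProductItem objects reach this feed.
        "item_classes": ["myproject.items.ProductItem"],
        # Post-processing: gzip-compress the output file.
        "postprocessing": ["scrapy.extensions.postprocessing.GzipPlugin"],
    },
}
```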
- The TWISTED_REACTOR and ASYNCIO_EVENT_LOOP settings are no longer ignored if defined in custom_settings. (4485, 5352)
- Removed a module-level Twisted reactor import that could prevent using the asyncio reactor. (5357)
- The startproject command works with existing folders again. (4665, 4676)
- The FEED_URI_PARAMS setting now behaves as documented. (4962, 4966)
- Request.cb_kwargs once again allows the callback keyword. (5237, 5251, 5264)
- Made scrapy.utils.response.open_in_browser support more complex HTML. (5319, 5320)
- Fixed CSVFeedSpider.quotechar being interpreted as the CSV file encoding. (5391, 5394)
- Added missing setuptools to the list of dependencies. (5122)
- LinkExtractor now also works as expected with links that have comma-separated rel attribute values including nofollow. (5225)
- Fixed a TypeError that could be raised during feed export parameter parsing. (5359)
- asyncio support is no longer considered experimental. (5332)
- Included Windows-specific help for asyncio usage. (4976, 5315)
- Rewrote the headless browsing documentation with up-to-date best practices. (4484, 4613)
- Documented local file naming in media pipelines. (5069, 5152)
- The FAQ now covers spider file name collision issues. (2680, 3669)
- Provided better context and instructions to disable the URLLENGTH_LIMIT setting. (5135, 5250)
- Documented that reppy-parser does not support Python 3.9+. (5226, 5231)
- Documented the scheduler component. (3537, 3559)
- Documented the method used by media pipelines to determine if a file has expired. (5120, 5254)
- The documentation on running multiple spiders now features scrapy.utils.project.get_project_settings usage. (5070)
- The documentation on running multiple spiders now covers what happens when you define different per-spider values for some settings that cannot differ at run time. (4485, 5352)
- Extended the documentation of the StatsMailer extension. (5199, 5217)
- Added JOBDIR to the settings documentation. (5173, 5224)
- Documented Spider.attribute. (5174, 5244)
- Documented TextResponse.urljoin. (1582)
- Added the body_length parameter to the documented signature of the headers_received signal. (5270)
- Clarified SelectorList.get usage in the tutorial. (5256)
- The documentation now features the shortest import path of classes with multiple import paths. (2733, 5099)
- quotes.toscrape.com references now use HTTPS instead of HTTP. (5395, 5396)
- Added a link to our Discord server to the getting-help documentation. (5421, 5422)
- The pronunciation of the project name is now officially /ˈskreɪpaɪ/. (5280, 5281)
- Added the Scrapy logo to the README. (5255, 5258)
- Fixed issues and implemented minor improvements. (3155, 4335, 5074, 5098, 5134, 5180, 5194, 5239, 5266, 5271, 5273, 5274, 5276, 5347, 5356, 5414, 5415, 5416, 5419, 5420)
- Added support for Python 3.10. (5212, 5221, 5265)
- Significantly reduced the memory usage of scrapy.utils.response.response_httprepr, used by the DownloaderStats downloader middleware, which is enabled by default. (4964, 4972)
- Removed uses of the deprecated optparse module. (5366, 5374)
- Extended typing hints. (5077, 5090, 5100, 5108, 5171, 5215, 5334)
- Improved tests, fixed CI issues, removed unused code. (5094, 5157, 5162, 5198, 5207, 5208, 5229, 5298, 5299, 5310, 5316, 5333, 5388, 5389, 5400, 5401, 5404, 5405, 5407, 5410, 5412, 5425, 5427)
- Implemented improvements for contributors. (5080, 5082, 5177, 5200)
- Implemented cleanups. (5095, 5106, 5209, 5228, 5235, 5245, 5246, 5292, 5314, 5322)
Security bug fix:
If you use HttpAuthMiddleware (i.e. the http_user and http_pass spider attributes) for HTTP authentication, any request exposes your credentials to the request target.
To prevent unintended exposure of authentication credentials to unintended domains, you must now additionally set a new spider attribute, http_auth_domain, and point it to the specific domain to which the authentication credentials must be sent.
If the http_auth_domain spider attribute is not set, the domain of the first request will be considered the HTTP authentication target, and authentication credentials will only be sent in requests targeting that domain.
If you need to send the same HTTP authentication credentials to multiple domains, you can use w3lib.http.basic_auth_header instead to set the value of the Authorization header of your requests.
If you really want your spider to send the same HTTP authentication credentials to any domain, set the http_auth_domain spider attribute to None.
Finally, if you are a user of scrapy-splash, know that this version of Scrapy breaks compatibility with scrapy-splash 0.7.2 and earlier. You will need to upgrade scrapy-splash to a greater version for it to continue to work.
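A minimal sketch of the http_auth_domain attribute described above; the spider and domain are illustrative:

```python
import scrapy

class IntranetSpider(scrapy.Spider):  # hypothetical spider
    name = "intranet"
    http_user = "bob"
    http_pass = "s3cr3t"
    # Credentials are only sent with requests targeting this domain:
    http_auth_domain = "intranet.example.com"
```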
Highlights:
- Official Python 3.9 support
- Experimental HTTP/2 support
- New get_retry_request function to retry requests from spider callbacks
- New headers_received signal that allows stopping downloads early
- New Response.protocol attribute
- Removed all code that was deprecated in 1.7.0 and had not already been removed in 2.4.0. (4901)
- Removed support for the SCRAPY_PICKLED_SETTINGS_TO_OVERRIDE environment variable, deprecated in 1.8.0. (4912)
- The scrapy.utils.py36 module is now deprecated in favor of scrapy.utils.asyncgen. (4900)
- Experimental HTTP/2 support through a new download handler that can be assigned to the https protocol in the DOWNLOAD_HANDLERS setting. (1854, 4769, 5058, 5059, 5066)
- The new scrapy.downloadermiddlewares.retry.get_retry_request function may be used from spider callbacks or middlewares to handle the retrying of a request beyond the scenarios that RetryMiddleware supports (see the first sketch after this list). (3590, 3685, 4902)
- The new headers_received signal gives early access to response headers and allows stopping downloads (see the second sketch after this list). (1772, 4897)
- The new Response.protocol attribute gives access to the string that identifies the protocol used to download a response. (4878)
- Stats now include the following entries that indicate the number of successes and failures in storing feeds:
  feedexport/success_count/<storage type>
  feedexport/failed_count/<storage type>
  Where <storage type> is the feed storage backend class name, such as FileFeedStorage or FTPFeedStorage. (3947, 4850)
- The UrlLengthMiddleware spider middleware now logs ignored URLs with INFO logging level instead of DEBUG, and it now includes the following entry into stats to keep track of the number of ignored URLs: urllength/request_ignored_count (5036)
- The HttpCompressionMiddleware downloader middleware now logs the number of decompressed responses and the total count of resulting bytes:
  httpcompression/response_bytes
  httpcompression/response_count
  (4797, 4799)
- Fixed installation on PyPy installing PyDispatcher in addition to PyPyDispatcher, which could prevent Scrapy from working depending on which package got imported. (4710, 4814)
- When inspecting a callback to check if it is a generator that also returns a value, an exception is no longer raised if the callback has a docstring with lower indentation than the following code. (4477, 4935)
- The Content-Length header is no longer omitted from responses when using the default, HTTP/1.1 download handler (see DOWNLOAD_HANDLERS). (5009, 5034, 5045, 5057, 5062)
- Setting the handle_httpstatus_all request meta key to False now has the same effect as not setting it at all, instead of having the same effect as setting it to True. (3851, 4694)
- Added instructions to install Scrapy in Windows using pip. (4715, 4736)
- Logging documentation now includes additional ways to filter logs. (4216, 4257, 4965)
- Covered how to deal with long lists of allowed domains in the FAQ. (2263, 3667)
- Covered scrapy-bench in the benchmarking documentation. (4996, 5016)
- Clarified that one extension instance is created per crawler. (5014)
- Fixed some errors in examples. (4829, 4830, 4907, 4909, 5008)
- Fixed some external links, typos, and so on. (4892, 4899, 4936, 4942, 5005, 5063)
- The list of Request.meta keys is now sorted alphabetically. (5061, 5065)
- Updated references to Scrapinghub, which is now called Zyte. (4973, 5072)
- Added a mention of contributors in the README. (4956)
- Reduced the top margin of lists. (4974)
- Made Python 3.9 support official. (4757, 4759)
- Extended typing hints. (4895)
- Fixed deprecated uses of the Twisted API. (4940, 4950, 5073)
- Made our tests run with the new pip resolver. (4710, 4814)
- Added tests to ensure that coroutine support is tested. (4987)
- Migrated from Travis CI to GitHub Actions. (4924)
- Fixed CI issues. (4986, 5020, 5022, 5027, 5052, 5053)
- Implemented code refactorings, style fixes and cleanups. (4911, 4982, 5001, 5002, 5076)
- Fixed feed exports overwrite support. (4845, 4857, 4859)
- Fixed the asyncio event loop handling, which could make code hang. (4855, 4872)
- Fixed the IPv6-capable DNS resolver CachingHostnameResolver for download handlers that call reactor.resolve. (4802, 4803)
- Fixed the output of the genspider command showing placeholders instead of the import path of the generated spider module. (4874)
- Migrated Windows CI from Azure Pipelines to GitHub Actions. (4869, 4876)
Highlights:
- Python 3.5 support has been dropped.
- The file_path method of media pipelines can now access the source item. This allows you to set a download file path based on item data (see the first sketch after this list).
- The new item_export_kwargs key of the FEEDS setting allows defining keyword parameters to pass to item exporter classes (see the second sketch after this list).
- You can now choose whether feed exports overwrite or append to the output file. For example, when using the crawl or runspider commands, you can use the -O option instead of -o to overwrite the output file.
- Zstd-compressed responses are now supported if zstandard is installed.
- In settings, where the import path of a class is required, it is now possible to pass a class object instead.
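A sketch of a media pipeline using the new item parameter; the field name is illustrative:

```python
from scrapy.pipelines.files import FilesPipeline

class PerCategoryFilesPipeline(FilesPipeline):  # hypothetical subclass
    def file_path(self, request, response=None, info=None, *, item=None):
        # Build the download path from item data (new in this release).
        return f"files/{item['category']}/{request.url.rsplit('/', 1)[-1]}"
```

And a FEEDS sketch combining overwrite and item_export_kwargs; the delimiter keyword is assumed to be forwarded by CsvItemExporter to the underlying csv writer:

```python
# settings.py
FEEDS = {
    "items.csv": {
        "format": "csv",
        "overwrite": True,  # overwrite the output file instead of appending
        "item_export_kwargs": {"delimiter": ";"},  # passed to the exporter
    },
}
```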
Python 3.6 or greater is now required; support for Python 3.5 has been dropped.
As a result:
- When using PyPy, PyPy 7.2.0 or greater is now required
- For Amazon S3 storage support in feed exports or media pipelines, botocore 1.4.87 or greater is now required
- To use the images pipeline, Pillow 4.0.0 or greater is now required
(4718, 4732, 4733, 4742, 4743, 4764)
CookiesMiddleware once again discards cookies defined in Request.headers. We decided to revert this bug fix, introduced in Scrapy 2.2.0, because it was reported that the current implementation could break existing code. If you need to set cookies for a request, use the Request.cookies parameter. A future version of Scrapy will include a new, better implementation of the reverted bug fix. (4717, 4823)
- scrapy.extensions.feedexport.S3FeedStorage no longer reads the values of access_key and secret_key from the running project settings when they are not passed to its __init__ method; you must either pass those parameters to its __init__ method or use S3FeedStorage.from_crawler. (4356, 4411, 4688)
- Rule.process_request no longer admits callables which expect a single request parameter, rather than both request and response. (4818)
- In custom media pipelines, signatures that do not accept a keyword-only item parameter in any of the methods that now support this parameter are now deprecated. (4628, 4686)
- In custom feed storage backend classes, __init__ method signatures that do not accept a keyword-only feed_options parameter are now deprecated. (547, 716, 4512)
- The scrapy.utils.python.WeakKeyCache class is now deprecated. (4684, 4701)
- The scrapy.utils.boto.is_botocore function is now deprecated; use scrapy.utils.boto.is_botocore_available instead. (4734, 4776)
- The following methods of media pipelines now accept an item keyword-only parameter containing the source item:
  - In scrapy.pipelines.files.FilesPipeline: file_downloaded, file_path, media_downloaded, and media_to_download
  - In scrapy.pipelines.images.ImagesPipeline: file_downloaded, file_path, get_images, image_downloaded, media_downloaded, and media_to_download
  (4628, 4686)
- The new item_export_kwargs key of the FEEDS setting allows defining keyword parameters to pass to item exporter classes. (4606, 4768)
- Feed exports gained overwrite support:
  - When using the crawl or runspider commands, you can use the -O option instead of -o to overwrite the output file
  - You can use the overwrite key in the FEEDS setting to configure whether to overwrite the output file (True) or append to its content (False)
  - The __init__ and from_crawler methods of feed storage backend classes now receive a new keyword-only parameter, feed_options, which is a dictionary of feed options
  (547, 716, 4512)
- Zstd-compressed responses are now supported if zstandard is installed. (4831)
- In settings, where the import path of a class is required, it is now possible to pass a class object instead. (3870, 3873) This also includes settings where only part of the value is made of an import path, such as DOWNLOADER_MIDDLEWARES or DOWNLOAD_HANDLERS.
- Downloader middlewares can now override response.request. If a downloader middleware returns a Response object from process_response or process_exception with a custom Request object assigned to response.request:
  - The response is handled by the callback of that custom Request object, instead of being handled by the callback of the original Request object
  - That custom Request object is now sent as the request argument to the response_received signal, instead of the original Request object
  (4529, 4632)
- When using the FTP feed storage backend:
  - It is now possible to set the new overwrite feed option to False to append to an existing file instead of overwriting it
  - The FTP password can now be omitted if it is not necessary
  (547, 716, 4512)
- The __init__ method of CsvItemExporter now supports an errors parameter to indicate how to handle encoding errors. (4755)
- When using asyncio, it is now possible to set a custom asyncio loop. (4306, 4414)
- Serialized requests (see the documentation on jobs) now support callbacks that are spider methods that delegate to another callable. (4756)
- When a response is larger than DOWNLOAD_MAXSIZE, the logged message is now a warning, instead of an error. (3874, 3886, 4752)
- The genspider command no longer overwrites existing files unless the --force option is used. (4561, 4616, 4623)
- Cookies with an empty value are no longer considered invalid cookies. (4772)
- The runspider command now supports files with the .pyw file extension. (4643, 4646)
- The HttpProxyMiddleware middleware now simply ignores unsupported proxy values. (3331, 4778)
- Checks for generator callbacks with a return statement no longer warn about return statements in nested functions. (4720, 4721)
- The system file mode creation mask no longer affects the permissions of files generated using the startproject command. (4722)
- scrapy.utils.iterators.xmliter now supports namespaced node names. (861, 4746)
- Request objects can now have about: URLs, which can work when using a headless browser. (4835)
- The FEED_URI_PARAMS setting is now documented. (4671, 4724)
- Improved the documentation of link extractors with a usage example from a spider callback and reference documentation for the Link class. (4751, 4775)
- Clarified the impact of CONCURRENT_REQUESTS when using the CloseSpider extension. (4836)
- Removed references to Python 2's unicode type. (4547, 4703)
- We now have an official deprecation policy. (4705)
- Our documentation policies now cover usage of Sphinx's versionadded and versionchanged directives, and we have removed usages referencing Scrapy 1.4.0 and earlier versions. (3971, 4310)
- Other documentation cleanups. (4090, 4782, 4800, 4801, 4809, 4816, 4825)
- Extended typing hints. (4243, 4691)
- Added tests for the check command. (4663)
- Fixed test failures on Debian. (4726, 4727, 4735)
- Improved Windows test coverage. (4723)
- Switched to formatted string literals (f-strings) where possible. (4307, 4324, 4672)
- Modernized super usage. (4707)
- Other code and test cleanups. (1790, 3288, 4165, 4564, 4651, 4714, 4738, 4745, 4747, 4761, 4765, 4804, 4817, 4820, 4822, 4839)
Highlights:
- Feed exports now support Google Cloud Storage as a storage backend.
- The new FEED_EXPORT_BATCH_ITEM_COUNT setting allows delivering output items in batches of up to the specified number of items (see the sketch after this list). It also serves as a workaround for delayed file delivery, which causes Scrapy to only start item delivery after the crawl has finished when using certain storage backends (S3, FTP, and now GCS).
- The base implementation of item loaders has been moved into a separate library, itemloaders, allowing usage from outside Scrapy and a separate release schedule.
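A batching sketch; the URI must contain a batch placeholder such as %(batch_id)d or %(batch_time)s when batching is enabled, and the bucket name is illustrative:

```python
# settings.py
FEED_EXPORT_BATCH_ITEM_COUNT = 100  # start a new output file every 100 items
FEEDS = {
    "s3://mybucket/items-%(batch_id)d.json": {"format": "json"},
}
```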
Removed the following classes and their parent modules from scrapy.linkextractors:
- htmlparser.HtmlParserLinkExtractor
- regex.RegexLinkExtractor
- sgml.BaseSgmlLinkExtractor
- sgml.SgmlLinkExtractor
Use LinkExtractor instead. (4356, 4679)
- The scrapy.utils.python.retry_on_eintr function is now deprecated. (4683)
- Feed exports support Google Cloud Storage. (685, 3608)
- New FEED_EXPORT_BATCH_ITEM_COUNT setting for batch deliveries. (4250, 4434)
- The parse command now allows specifying an output file. (4317, 4377)
- Request.from_curl and curl_to_request_kwargs now also support --data-raw. (4612)
- A parse callback may now be used in built-in spider subclasses, such as CrawlSpider. (712, 732, 781, 4254)
- Fixed the CSV exporting of dataclass items and attr.s items. (4667, 4668)
- Request.from_curl and curl_to_request_kwargs now set the request method to POST when a request body is specified and no request method is specified. (4612)
- The processing of ANSI escape sequences is enabled in Windows 10.0.14393 and later, where it is required for colored output. (4393, 4403)
- Updated the OpenSSL cipher list format link in the documentation about the DOWNLOADER_CLIENT_TLS_CIPHERS setting. (4653)
- Simplified the code example in the documentation on item loaders with dataclass items. (4652)
- The base implementation of item loaders has been moved into itemloaders. (4005, 4516)
- Fixed a silenced error in some scheduler tests. (4644, 4645)
- Renewed the localhost certificate used for SSL tests. (4650)
- Removed cookie-handling code specific to Python 2. (4682)
- Stopped using Python 2 unicode literal syntax. (4704)
- Stopped using a backslash for line continuation. (4673)
- Removed unneeded entries from the MyPy exception list. (4690)
- Automated tests now pass on Windows as part of our continuous integration system. (4458)
- Automated tests now pass on the latest PyPy version for supported Python versions in our continuous integration system. (4504)
- The
startproject
command no longer makes unintended changes to the permissions of files in the destination folder, such as removing execution permissions (4662
,4666
)
Highlights:
- Python 3.5.2+ is required now
dataclass objects <dataclass-items>
andattrs objects <attrs-items>
are now validitem types <item-types>
- New
TextResponse.json <scrapy.http.TextResponse.json>
method (see the example after this list) - New
bytes_received
signal that allows canceling response download ~scrapy.downloadermiddlewares.cookies.CookiesMiddleware
fixes
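For example, inside a spider callback (a minimal sketch; the "title" key is hypothetical):

def parse(self, response):
    # Deserializes a JSON response body in one call; roughly equivalent
    # to json.loads(response.text) for valid JSON encodings
    data = response.json()
    yield {"title": data["title"]}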
- Support for Python 3.5.0 and 3.5.1 has been dropped; Scrapy now refuses to run with a Python version lower than 3.5.2, which introduced
typing.Type
(4615
)
TextResponse.body_as_unicode <scrapy.http.TextResponse.body_as_unicode>
is now deprecated, useTextResponse.text <scrapy.http.TextResponse.text>
instead (4546
,4555
,4579
)scrapy.item.BaseItem
is now deprecated, usescrapy.item.Item
instead (4534
)
dataclass objects <dataclass-items>
andattrs objects <attrs-items>
are now validitem types <item-types>
, and a new itemadapter library makes it easy to write code thatsupports any item type <supporting-item-types>
(2749
,2807
,3761
,3881
,4642
)- A new
TextResponse.json <scrapy.http.TextResponse.json>
method allows deserializing JSON responses (2444
,4460
,4574
) - A new
bytes_received
signal allows monitoring response download progress and stopping downloads <topics-stop-response-download>; see the sketch after this list
(4205
,4559
) - The dictionaries in the result list of a
media pipeline <topics-media-pipeline>
now include a new key,status
, which indicates if the file was downloaded or, if the file was not downloaded, why it was not downloaded; seeFilesPipeline.get_media_requests <scrapy.pipelines.files.FilesPipeline.get_media_requests>
for more information (2893
,4486
) - When using
Google Cloud Storage <media-pipeline-gcs>
for amedia pipeline <topics-media-pipeline>
, a warning is now logged if the configured credentials do not grant the required permissions (4346
,4508
) Link extractors <topics-link-extractors>
are now serializable, as long as you do not uselambdas <lambda>
for parameters; for example, you can now pass link extractors inRequest.cb_kwargs <scrapy.http.Request.cb_kwargs>
orRequest.meta <scrapy.http.Request.meta>
whenpersisting scheduled requests <topics-jobs>
(4554
)- Upgraded the
pickle protocol <pickle-protocols>
that Scrapy uses from protocol 2 to protocol 4, improving serialization capabilities and performance (4135
,4541
) scrapy.utils.misc.create_instance
now raises aTypeError
exception if the resulting instance isNone
(4528
,4532
)
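As an example of the new bytes_received signal, a spider can stop a download after the first received chunk (a minimal sketch based on the stop-download support described above; the URL is illustrative):

import scrapy
from scrapy.exceptions import StopDownload

class StopSpider(scrapy.Spider):
    name = "stop"
    start_urls = ["https://example.com"]

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(
            spider.on_bytes_received, signal=scrapy.signals.bytes_received
        )
        return spider

    def on_bytes_received(self, data, request, spider):
        # Stop downloading; the partial response still reaches the callback
        raise StopDownload(fail=False)

    def parse(self, response):
        yield {"partial_length": len(response.body)}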
~scrapy.downloadermiddlewares.cookies.CookiesMiddleware
no longer discards cookies defined inRequest.headers <scrapy.http.Request.headers>
(1992
,2400
)~scrapy.downloadermiddlewares.cookies.CookiesMiddleware
no longer re-encodes cookies defined asbytes
in thecookies
parameter of the__init__
method of~scrapy.http.Request
(2400
,3575
)- When
FEEDS
defines multiple URIs,FEED_STORE_EMPTY
isFalse
and the crawl yields no items, Scrapy no longer stops feed exports after the first URI (4621
,4626
) ~scrapy.spiders.Spider
callbacks defined usingcoroutine syntax <topics/coroutines>
no longer need to return an iterable, and may instead return a~scrapy.http.Request
object, anitem <topics-items>
, orNone
(4609
)- The
startproject
command now ensures that the generated project folders and files have the right permissions (4604
- Fixed a
KeyError
exception sometimes being raised from scrapy.utils.datatypes.LocalWeakReferencedCache
(4597
,4599
) - When
FEEDS
defines multiple URIs, log messages about items being stored now contain information from the corresponding feed, instead of always containing information about only one of the feeds (4619
,4629
)
- Added a new section about
accessing cb_kwargs from errbacks <errback-cb_kwargs>
(4598
,4634
) - Covered chompjs in
topics-parsing-javascript
(4556
,4562
) - Removed from
topics/coroutines
the warning about the API being experimental (4511
,4513
) - Removed references to unsupported versions of
Twisted <twisted:index>
(4533
) - Updated the description of the
screenshot pipeline example <ScreenshotPipeline>
, which now usescoroutine syntax <topics/coroutines>
instead of returning a~twisted.internet.defer.Deferred
(4514
,4593
) - Removed a misleading import line from the
scrapy.utils.log.configure_logging
code example (4510
,4587
) - The display-on-hover behavior of internal documentation references now also covers links to
commands <topics-commands>
,Request.meta <scrapy.http.Request.meta>
keys,settings <topics-settings>
andsignals <topics-signals>
(4495
,4563
) - It is again possible to download the documentation for offline reading (
4578
,4585
) - Removed backslashes preceding
*args
and**kwargs
in some function and method signatures (4592
,4596
)
- Adjusted the code base further to our
style guidelines <coding-style>
(4237
,4525
,4538
,4539
,4540
,4542
,4543
,4544
,4545
,4557
,4558
,4566
,4568
,4572
) - Removed remnants of Python 2 support (
4550
,4553
,4568
) - Improved code sharing between the
crawl
andrunspider
commands (4548
,4552
) - Replaced
chain(*iterable)
withchain.from_iterable(iterable)
(4635
) - You may now run the
asyncio
tests with Tox on any Python version (4521
) - Updated test requirements to reflect an incompatibility with pytest 5.4 and 5.4.1 (
4588
) - Improved
~scrapy.spiderloader.SpiderLoader
test coverage for scenarios involving duplicate spider names (4549
,4560
) - Configured Travis CI to also run the tests with Python 3.5.2 (
4518
,4615
) - Added a Pylint job to Travis CI (
3727
) - Added a Mypy job to Travis CI (
4637
) - Made use of set literals in tests (
4573
) - Cleaned up the Travis CI configuration (
4517
,4519
,4522
,4537
)
Highlights:
- New
FEEDS
setting to export to multiple feeds - New
Response.ip_address <scrapy.http.Response.ip_address>
attribute
AssertionError
exceptions triggered byassert <assert>
statements have been replaced by new exception types, to support running Python in optimized mode (see-O
) without changing Scrapy’s behavior in any unexpected ways.If you catch an
AssertionError
exception from Scrapy, update your code to catch the corresponding new exception.(
4440
)
- The
LOG_UNSERIALIZABLE_REQUESTS
setting is no longer supported, useSCHEDULER_DEBUG
instead (4385
) - The
REDIRECT_MAX_METAREFRESH_DELAY
setting is no longer supported, useMETAREFRESH_MAXDELAY
instead (4385
) - The
~scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware
middleware has been removed, including the entirescrapy.downloadermiddlewares.chunked
module; chunked transfers work out of the box (4431
) - The
spiders
property has been removed from~scrapy.crawler.Crawler
, useCrawlerRunner.spider_loader <scrapy.crawler.CrawlerRunner.spider_loader>
or instantiateSPIDER_LOADER_CLASS
with your settings instead (4398
) - The
MultiValueDict
,MultiValueDictKeyError
, andSiteNode
classes have been removed fromscrapy.utils.datatypes
(4400
)
- The
FEED_FORMAT
andFEED_URI
settings have been deprecated in favor of the newFEEDS
setting (1336
,3858
,4507
)
- A new setting,
FEEDS
, allows configuring multiple output feeds, each with its own settings (see the example after this list) (1336
,3858
,4507
) - The
crawl
andrunspider
commands now support multiple-o
parameters (1336
,3858
,4507
) - The
crawl
andrunspider
commands now support specifying an output format by appending:<format>
to the output file (1336
,3858
,4507
) - The new
Response.ip_address <scrapy.http.Response.ip_address>
attribute gives access to the IP address that originated a response (3903
,3940
) - A warning is now issued when a value in
~scrapy.spiders.Spider.allowed_domains
includes a port (50
,3198
,4413
) - Zsh completion now excludes used option aliases from the completion list (
4438
)
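A minimal sketch of the new FEEDS setting described above, exporting the same items to two feeds with different formats (file names are illustrative):

# settings.py
FEEDS = {
    "items.json": {"format": "json"},
    "items.csv": {"format": "csv"},
}

On the command line, the same can be achieved with multiple -o parameters, e.g. scrapy crawl myspider -o items.json -o items.csv.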
Request serialization <request-serialization>
no longer breaks for callbacks that are spider attributes which are assigned a function with a different name (4500
)None
values in~scrapy.spiders.Spider.allowed_domains
no longer cause aTypeError
exception (4410
)- Zsh completion no longer allows options after arguments (
4438
) - zope.interface 5.0.0 and later versions are now supported (
4447
,4448
) Spider.make_requests_from_url
, deprecated in Scrapy 1.4.0, now issues a warning when used (4412
)
- Improved the documentation about signals that allow their handlers to return a
~twisted.internet.defer.Deferred
(4295
,4390
) - Our PyPI entry now includes links for our documentation, our source code repository and our issue tracker (
4456
) - Covered the curl2scrapy service in the documentation (
4206
,4455
) - Removed references to the Guppy library, which only works in Python 2 (
4285
,4343
) - Extended use of InterSphinx to link to Python 3 documentation (
4444
,4445
) - Added support for Sphinx 3.0 and later (
4475
,4480
,4496
,4503
)
- Removed warnings about using old, removed settings (
4404
) - Removed a warning about importing
~twisted.internet.testing.StringTransport
fromtwisted.test.proto_helpers
in Twisted 19.7.0 or newer (4409
) - Removed outdated Debian package build files (
4384
) - Removed
object
usage as a base class (4430
) - Removed code that added support for old versions of Twisted that we no longer support (
4472
) - Fixed code style issues (
4468
,4469
,4471
,4481
) - Removed
twisted.internet.defer.returnValue
calls (4443
,4446
,4489
)
Response.follow_all <scrapy.http.Response.follow_all>
now supports an empty URL iterable as input (4408
,4420
)- Removed top-level
~twisted.internet.reactor
imports to prevent errors about the wrong Twisted reactor being installed when setting a different Twisted reactor usingTWISTED_REACTOR
(4401
,4406
) - Fixed tests (
4422
)
Highlights:
- Python 2 support has been removed
Partial <topics/coroutines>
coroutine syntax <async>
support andexperimental <topics/asyncio>
asyncio
support- New
Response.follow_all <scrapy.http.Response.follow_all>
method FTP support <media-pipeline-ftp>
for media pipelines- New
Response.certificate <scrapy.http.Response.certificate>
attribute - IPv6 support through
DNS_RESOLVER
- Python 2 support has been removed, following Python 2 end-of-life on January 1, 2020 (
4091
,4114
,4115
,4121
,4138
,4231
,4242
,4304
,4309
,4373
) - Retry gaveups (see
RETRY_TIMES
) are now logged as errors instead of as debug information (3171
,3566
) - File extensions that
LinkExtractor <scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor>
ignores by default now also include7z
,7zip
,apk
,bz2
,cdr
,dmg
,ico
,iso
,tar
,tar.gz
,webm
, andxz
(1837
,2067
,4066
) - The
METAREFRESH_IGNORE_TAGS
setting is now an empty list by default, following web browser behavior (3844
,4311
) - The
~scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware
now includes spaces after commas in the value of theAccept-Encoding
header that it sets, following web browser behavior (4293
) The
__init__
method of custom download handlers (seeDOWNLOAD_HANDLERS
) or subclasses of the following downloader handlers no longer receives asettings
parameter:scrapy.core.downloader.handlers.datauri.DataURIDownloadHandler
scrapy.core.downloader.handlers.file.FileDownloadHandler
Use the
from_settings
orfrom_crawler
class methods to expose such a parameter to your custom download handlers.(
4126
)- We have refactored the
scrapy.core.scheduler.Scheduler
class and related queue classes (seeSCHEDULER_PRIORITY_QUEUE
,SCHEDULER_DISK_QUEUE
andSCHEDULER_MEMORY_QUEUE
) to make it easier to implement custom scheduler queue classes. See2-0-0-scheduler-queue-changes
below for details. - Overridden settings are now logged in a different format. This is more in line with similar information logged at startup (
4199
)
- The
Scrapy shell <topics-shell>
no longer provides a sel proxy object, useresponse.selector <scrapy.http.Response.selector>
instead (4347
) - LevelDB support has been removed (
4112
) - The following functions have been removed from
scrapy.utils.python
:isbinarytext
,is_writable
,setattr_default
,stringify_dict
(4362
)
- Using environment variables prefixed with
SCRAPY_
to override settings is deprecated (4300
,4374
,4375
) scrapy.linkextractors.FilteringLinkExtractor
is deprecated, usescrapy.linkextractors.LinkExtractor <scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor>
instead (4045
)- The
noconnect
query string argument of proxy URLs is deprecated and should be removed from proxy URLs (4198
) - The
next <scrapy.utils.python.MutableChain.next>
method ofscrapy.utils.python.MutableChain
is deprecated, use the globalnext
function orMutableChain.__next__ <scrapy.utils.python.MutableChain.__next__>
instead (4153
)
- Added
partial support <topics/coroutines>
for Python’scoroutine syntax <async>
andexperimental support <topics/asyncio>
forasyncio
andasyncio
-powered libraries; see the example after this list (4010
,4259
,4269
,4270
,4271
,4316
,4318
) - The new
Response.follow_all <scrapy.http.Response.follow_all>
method offers the same functionality asResponse.follow <scrapy.http.Response.follow>
but supports an iterable of URLs as input and returns an iterable of requests (2582
,4057
,4286
) Media pipelines <topics-media-pipeline>
now supportFTP storage <media-pipeline-ftp>
(3928
,3961
)- The new
Response.certificate <scrapy.http.Response.certificate>
attribute exposes the SSL certificate of the server as atwisted.internet.ssl.Certificate
object for HTTPS responses (2726
,4054
) - A new
DNS_RESOLVER
setting allows enabling IPv6 support (1031
,4227
) - A new
SCRAPER_SLOT_MAX_ACTIVE_SIZE
setting allows configuring the existing soft limit that pauses request downloads when the total response data being processed is too high (1410
,3551
) - A new
TWISTED_REACTOR
setting allows customizing the~twisted.internet.reactor
that Scrapy uses, allowing you to enable asyncio support <topics/asyncio>
or deal with acommon macOS issue <faq-specific-reactor>
(2905
,4294
) - Scheduler disk and memory queues may now use the class methods
from_crawler
orfrom_settings
(3884
) - The new
Response.cb_kwargs <scrapy.http.Response.cb_kwargs>
attribute serves as a shortcut forResponse.request.cb_kwargs <scrapy.http.Request.cb_kwargs>
(4331
) Response.follow <scrapy.http.Response.follow>
now supports aflags
parameter, for consistency with~scrapy.http.Request
(4277
,4279
)Item loader processors <topics-loaders-processors>
can now be regular functions, they no longer need to be methods (3899
)~scrapy.spiders.Rule
now accepts anerrback
parameter (4000
)~scrapy.http.Request
no longer requires acallback
parameter when anerrback
parameter is specified (3586
,4008
)~scrapy.logformatter.LogFormatter
now supports some additional methods:~scrapy.logformatter.LogFormatter.download_error
for download errors~scrapy.logformatter.LogFormatter.item_error
for exceptions raised during item processing byitem pipelines <topics-item-pipeline>
~scrapy.logformatter.LogFormatter.spider_error
for exceptions raised fromspider callbacks <topics-spiders>
(
374
,3986
,3989
,4176
,4188
)- The
FEED_URI
setting now supportspathlib.Path
values (3731
,4074
) - A new
request_left_downloader
signal is sent when a request leaves the downloader (4303
) - Scrapy logs a warning when it detects a request callback or errback that uses
yield
but also returns a value, since the returned value would be lost (3484
,3869
) ~scrapy.spiders.Spider
objects now raise anAttributeError
exception if they do not have a~scrapy.spiders.Spider.start_urls
attribute nor reimplement~scrapy.spiders.Spider.start_requests
, but have astart_url
attribute (4133
,4170
)~scrapy.exporters.BaseItemExporter
subclasses may now usesuper().__init__(**kwargs)
instead ofself._configure(kwargs)
in their__init__
method, passingdont_fail=True
to the parent__init__
method if needed, and accessingkwargs
atself._kwargs
after calling their parent__init__
method (4193
,4370
)- A new
keep_fragments
parameter ofscrapy.utils.request.request_fingerprint
allows generating different fingerprints for requests with different fragments in their URL (4104
) - Download handlers (see
DOWNLOAD_HANDLERS
) may now use thefrom_settings
andfrom_crawler
class methods that other Scrapy components already supported (4126
) scrapy.utils.python.MutableChain.__iter__
now returnsself
, allowing it to be used as a sequence (4153
)
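For example, a callback may now be written with coroutine syntax (a minimal sketch; under the partial support added in this release, a coroutine callback returns its results rather than yielding them, and the TWISTED_REACTOR line is only needed for asyncio support):

import scrapy

# In settings.py, for asyncio support only:
# TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"

class ExampleSpider(scrapy.Spider):
    name = "example"
    start_urls = ["https://example.com"]

    async def parse(self, response):
        # Await coroutine- or Deferred-based code here, then return
        # an iterable of results
        return [{"url": response.url}]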
- The
crawl
command now also exits with exit code 1 when an exception happens before the crawling starts (4175
,4207
) LinkExtractor.extract_links <scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor.extract_links>
no longer re-encodes the query string or URLs from non-UTF-8 responses in UTF-8 (998
,1403
,1949
,4321
)- The first spider middleware (see
SPIDER_MIDDLEWARES
) now also processes exceptions raised from callbacks that are generators (4260
,4272
) - Redirects to URLs starting with 3 slashes (
///
) are now supported (4032
,4042
) ~scrapy.http.Request
no longer accepts strings asurl
simply because they have a colon (2552
,4094
- The correct encoding is now used for attachment names in
~scrapy.mail.MailSender
(4229
,4239
) ~scrapy.dupefilters.RFPDupeFilter
, the defaultDUPEFILTER_CLASS
, no longer writes an extra\r
character on each line in Windows, which made the size of therequests.seen
file unnecessarily large on that platform (4283
)- Z shell auto-completion now looks for
.html
files, not.http
files, and covers the-h
command-line switch (4122
,4291
) - Adding items to a
scrapy.utils.datatypes.LocalCache
object without alimit
defined no longer raises aTypeError
exception (4123
) - Fixed a typo in the message of the
ValueError
exception raised whenscrapy.utils.misc.create_instance
gets bothsettings
andcrawler
set toNone
(4128
)
- API documentation now links to an online, syntax-highlighted view of the corresponding source code (
4148
- Links to nonexistent documentation pages now allow access to the sidebar (
4152
,4169
) - Cross-references within our documentation now display a tooltip when hovered (
4173
,4183
) - Improved the documentation about
LinkExtractor.extract_links <scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor.extract_links>
and simplifiedtopics-link-extractors
(4045
) - Clarified how
ItemLoader.item <scrapy.loader.ItemLoader.item>
works (3574
,4099
) - Clarified that
logging.basicConfig
should not be used when also using~scrapy.crawler.CrawlerProcess
(2149
,2352
,3146
,3960
) - Clarified the requirements for
~scrapy.http.Request
objectswhen using persistence <request-serialization>
(4124
,4139
) - Clarified how to install a
custom image pipeline <media-pipeline-example>
(4034
,4252
) - Fixed the signatures of the
file_path
method inmedia pipeline <topics-media-pipeline>
examples (4290
) - Covered a backward-incompatible change in Scrapy 1.7.0 affecting custom
scrapy.core.scheduler.Scheduler
subclasses (4274
) - Improved the
README.rst
andCODE_OF_CONDUCT.md
files (4059
) - Documentation examples are now checked as part of our test suite and we have fixed some of the issues detected (
4142
,4146
,4171
,4184
,4190
) - Fixed logic issues, broken links and typos (
4247
,4258
,4282
,4288
,4305
,4308
,4323
,4338
,4359
,4361
) - Improved consistency when referring to the
__init__
method of an object (4086
,4088
) - Fixed an inconsistency between code and output in
intro-overview
(4213
) - Extended
~sphinx.ext.intersphinx
usage (4147
,4172
,4185
,4194
,4197
) - We now use a recent version of Python to build the documentation (
4140
,4249
) - Cleaned up documentation (
4143
,4275
)
- Re-enabled proxy
CONNECT
tests (2545
,4114
) - Added Bandit security checks to our test suite (
4162
,4181
) - Added Flake8 style checks to our test suite and applied many of the corresponding changes (
3944
,3945
,4137
,4157
,4167
,4174
,4186
,4195
,4238
,4246
,4355
,4360
,4365
) - Improved test coverage (
4097
,4218
,4236
) - Started reporting slowest tests, and improved the performance of some of them (
4163
,4164
) - Fixed broken tests and refactored some tests (
4014
,4095
,4244
,4268
,4372
) - Modified the
tox <tox:index>
configuration to allow running tests with any Python version, run Bandit and Flake8 tests by default, and enforce a minimum tox version programmatically (4179
) - Cleaned up code (
3937
,4208
,4209
,4210
,4212
,4369
,4376
,4378
)
The following changes may impact any custom queue classes of all types:
- The
push
method no longer receives a second positional parameter containingrequest.priority * -1
. If you need that value, get it from the first positional parameter,request
, instead, or use the new~scrapy.core.scheduler.ScrapyPriorityQueue.priority
method inscrapy.core.scheduler.ScrapyPriorityQueue
subclasses.
The following changes may impact custom priority queue classes:
- In the
__init__
method or thefrom_crawler
orfrom_settings
class methods:- The parameter that used to contain a factory function,
qfactory
, is now passed as a keyword parameter nameddownstream_queue_cls
. - A new keyword parameter has been added:
key
. It is a string that is always an empty string for memory queues and indicates theJOB_DIR
value for disk queues. - The parameter for disk queues that contains data from the previous crawl,
startprios
orslot_startprios
, is now passed as a keyword parameter namedstartprios
. - The
serialize
parameter is no longer passed. The disk queue class must take care of request serialization on its own before writing to disk, using the~scrapy.utils.reqser.request_to_dict
and~scrapy.utils.reqser.request_from_dict
functions from thescrapy.utils.reqser
module.
- The parameter that used to contain a factory function,
The following changes may impact custom disk and memory queue classes:
- The signature of the
__init__
method is now__init__(self, crawler, key)
.
The following changes affect specifically the ~scrapy.core.scheduler.ScrapyPriorityQueue
and ~scrapy.core.scheduler.DownloaderAwarePriorityQueue
classes from scrapy.core.scheduler
and may affect subclasses:
In the
__init__
method, most of the changes described above apply.__init__
may still receive all parameters as positional parameters, however:downstream_queue_cls
, which replacedqfactory
, must be instantiated differently.qfactory
was instantiated with a priority value (integer).Instances of
downstream_queue_cls
should be created using the newScrapyPriorityQueue.qfactory <scrapy.core.scheduler.ScrapyPriorityQueue.qfactory>
orDownloaderAwarePriorityQueue.pqfactory <scrapy.core.scheduler.DownloaderAwarePriorityQueue.pqfactory>
methods.- The new
key
parameter displaced thestartprios
parameter one position to the right.
- The following class attributes have been added:
~scrapy.core.scheduler.ScrapyPriorityQueue.crawler
~scrapy.core.scheduler.ScrapyPriorityQueue.downstream_queue_cls
(details above)~scrapy.core.scheduler.ScrapyPriorityQueue.key
(details above)
- The
serialize
attribute has been removed (details above)
The following changes affect specifically the ~scrapy.core.scheduler.ScrapyPriorityQueue
class and may affect subclasses:
A new
~scrapy.core.scheduler.ScrapyPriorityQueue.priority
method has been added which, given a request, returnsrequest.priority * -1
.It is used in
~scrapy.core.scheduler.ScrapyPriorityQueue.push
to make up for the removal of itspriority
parameter.- The
spider
attribute has been removed. Usecrawler.spider <scrapy.core.scheduler.ScrapyPriorityQueue.crawler>
instead.
The following changes affect specifically the ~scrapy.core.scheduler.DownloaderAwarePriorityQueue
class and may affect subclasses:
- A new
~scrapy.core.scheduler.DownloaderAwarePriorityQueue.pqueues
attribute offers a mapping of downloader slot names to the corresponding instances of~scrapy.core.scheduler.DownloaderAwarePriorityQueue.downstream_queue_cls
.
(3884
)
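Putting the above together, a custom memory queue might look like this (a minimal FIFO sketch, not a drop-in replacement; the exact method set and the from_crawler signature shown here are assumptions, so check the scheduler queue classes for the full interface):

class MyMemoryQueue:
    def __init__(self, crawler, key):
        # ``key`` is always an empty string for memory queues
        self.crawler = crawler
        self.key = key
        self._requests = []

    @classmethod
    def from_crawler(cls, crawler, key):
        # assumed signature; queues may now use from_crawler/from_settings
        return cls(crawler, key)

    def push(self, request):
        # no second ``priority`` positional parameter anymore; read
        # request.priority from the request itself if needed
        self._requests.append(request)

    def pop(self):
        return self._requests.pop(0) if self._requests else None

    def __len__(self):
        return len(self._requests)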
Security bug fixes:
Due to its ReDoS vulnerabilities,
scrapy.utils.iterators.xmliter
is now deprecated in favor of~scrapy.utils.iterators.xmliter_lxml
, which~scrapy.spiders.XMLFeedSpider
now uses.To minimize the impact of this change on existing code,
~scrapy.utils.iterators.xmliter_lxml
now supports indicating the node namespace as a prefix in the node name, and big files with highly nested trees when using libxml2 2.7+.Please, see the cc65-xxvf-f7r9 security advisory for more information.
DOWNLOAD_MAXSIZE
andDOWNLOAD_WARNSIZE
now also apply to the decompressed response body. Please, see the 7j7m-v7m3-jqm7 security advisory for more information.- Also in relation with the 7j7m-v7m3-jqm7 security advisory, use of the
scrapy.downloadermiddlewares.decompression
module is discouraged and will trigger a warning. - The
Authorization
header is now dropped on redirects to a different domain. Please, see the cw9j-q3vf-hrrv security advisory for more information.
Security bug fix:
When
~scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware
processes a request withproxy
metadata, and thatproxy
metadata includes proxy credentials,~scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware
sets theProxy-Authorization
header, but only if that header is not already set.There are third-party proxy-rotation downloader middlewares that set different
proxy
metadata every time they process a request.Because of request retries and redirects, the same request can be processed by downloader middlewares more than once, including both
~scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware
and any third-party proxy-rotation downloader middleware.These third-party proxy-rotation downloader middlewares could change the
proxy
metadata of a request to a new value, but fail to remove theProxy-Authorization
header from the previous value of theproxy
metadata, causing the credentials of one proxy to be sent to a different proxy.To prevent the unintended leaking of proxy credentials, the behavior of
~scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware
is now as follows when processing a request:- If the request being processed defines
proxy
metadata that includes credentials, theProxy-Authorization
header is always updated to feature those credentials. If the request being processed defines
proxy
metadata without credentials, theProxy-Authorization
header is removed unless it was originally defined for the same proxy URL.To remove proxy credentials while keeping the same proxy URL, remove the
Proxy-Authorization
header.If the request has no
proxy
metadata, or that metadata is a falsy value (e.g.None
), theProxy-Authorization
header is removed.It is no longer possible to set a proxy URL through the
proxy
metadata but set the credentials through theProxy-Authorization
header. Set proxy credentials through theproxy
metadata instead.
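For example, a hypothetical proxy-rotation downloader middleware compatible with this behavior would update the proxy metadata and drop the stale header in one step (names, proxy URLs, and credentials are illustrative):

import random

class RotatingProxyMiddleware:
    PROXIES = [
        "https://user1:pass1@proxy1.example.com:8080",
        "https://user2:pass2@proxy2.example.com:8080",
    ]

    def process_request(self, request, spider):
        # Assign a (possibly different) proxy on every pass, and remove any
        # Proxy-Authorization header left over from a previous proxy, so
        # credentials are re-derived from the new proxy URL
        request.meta["proxy"] = random.choice(self.PROXIES)
        request.headers.pop("Proxy-Authorization", None)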
Security bug fixes:
When a
~scrapy.http.Request
object with cookies defined gets a redirect response causing a new~scrapy.http.Request
object to be scheduled, the cookies defined in the original~scrapy.http.Request
object are no longer copied into the new~scrapy.http.Request
object.If you manually set the
Cookie
header on a~scrapy.http.Request
object and the domain name of the redirect URL is not an exact match for the domain of the URL of the original~scrapy.http.Request
object, yourCookie
header is now dropped from the new~scrapy.http.Request
object.The old behavior could be exploited by an attacker to gain access to your cookies. Please, see the cjvr-mfj7-j4j8 security advisory for more information.
Note
It is still possible to enable the sharing of cookies between different domains with a shared domain suffix (e.g.
example.com
and any subdomain) by defining the shared domain suffix (e.g.example.com
) as the cookie domain when defining your cookies. See the documentation of the~scrapy.http.Request
class for more information.When the domain of a cookie, either received in the
Set-Cookie
header of a response or defined in a~scrapy.http.Request
object, is set to a public suffix, the cookie is now ignored unless the cookie domain is the same as the request domain.The old behavior could be exploited by an attacker to inject cookies into your requests to some other domains. Please, see the mfjm-vh54-3f96 security advisory for more information.
Security bug fix:
If you use
~scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware
(i.e. thehttp_user
andhttp_pass
spider attributes) for HTTP authentication, any request exposes your credentials to the request target.To prevent unintended exposure of authentication credentials to unintended domains, you must now additionally set a new, additional spider attribute,
http_auth_domain
, and point it to the specific domain to which the authentication credentials must be sent.If the
http_auth_domain
spider attribute is not set, the domain of the first request will be considered the HTTP authentication target, and authentication credentials will only be sent in requests targeting that domain.If you need to send the same HTTP authentication credentials to multiple domains, you can use
w3lib.http.basic_auth_header
instead to set the value of theAuthorization
header of your requests.If you really want your spider to send the same HTTP authentication credentials to any domain, set the
http_auth_domain
spider attribute toNone
Finally, if you use scrapy-splash, note that this version of Scrapy breaks compatibility with scrapy-splash 0.7.2 and earlier; you will need to upgrade scrapy-splash to a later version for it to continue working.
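For example, a spider restricted to a single authentication target might look like this (a minimal sketch; the domain and credentials are illustrative):

import scrapy

class IntranetSpider(scrapy.Spider):
    name = "intranet"
    http_user = "bob"
    http_pass = "somepass"
    # Credentials are now only sent to requests targeting this domain:
    http_auth_domain = "intranet.example.com"
    start_urls = ["https://intranet.example.com/"]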
Highlights:
- Dropped Python 3.4 support and updated minimum requirements; made Python 3.8 support official
- New
Request.from_curl <scrapy.http.Request.from_curl>
class method - New
ROBOTSTXT_PARSER
andROBOTSTXT_USER_AGENT
settings - New
DOWNLOADER_CLIENT_TLS_CIPHERS
andDOWNLOADER_CLIENT_TLS_VERBOSE_LOGGING
settings
Python 3.4 is no longer supported, and some of the minimum requirements of Scrapy have also changed:
cssselect <cssselect:index>
0.9.1- cryptography 2.0
- lxml 3.5.0
- pyOpenSSL 16.2.0
- queuelib 1.4.2
- service_identity 16.0.0
- six 1.10.0
- Twisted 17.9.0 (16.0.0 with Python 2)
- zope.interface 4.1.3
(
3892
)JSONRequest
is now called~scrapy.http.JsonRequest
for consistency with similar classes (3929
,3982
)- If you are using a custom context factory (
DOWNLOADER_CLIENTCONTEXTFACTORY
), its__init__
method must accept two new parameters:tls_verbose_logging
andtls_ciphers
(2111
,3392
,3442
,3450
) ~scrapy.loader.ItemLoader
now turns the values of its input item into lists:>>> item = MyItem() >>> item["field"] = "value1" >>> loader = ItemLoader(item=item) >>> item["field"] ['value1']
This is needed to allow adding values to existing fields (
loader.add_value('field', 'value2')
).(
3804
,3819
,3897
,3976
,3998
,4036
)
See also 1.8-deprecation-removals
below.
- A new
Request.from_curl <scrapy.http.Request.from_curl>
class method allows creating a request from a cURL command <requests-from-curl>; see the example after this list
(2985
,3862
) - A new
ROBOTSTXT_PARSER
setting allows choosing which robots.txt parser to use. It includes built-in support forRobotFileParser <python-robotfileparser>
,Protego <protego-parser>
(default),Reppy <reppy-parser>
, andRobotexclusionrulesparser <rerp-parser>
, and allows you toimplement support for additional parsers <support-for-new-robots-parser>
(754
,2669
,3796
,3935
,3969
,4006
) - A new
ROBOTSTXT_USER_AGENT
setting allows defining a separate user agent string to use for robots.txt parsing (3931
,3966
) ~scrapy.spiders.Rule
no longer requires aLinkExtractor <scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor>
parameter (781
,4016
)- Use the new
DOWNLOADER_CLIENT_TLS_CIPHERS
setting to customize the TLS/SSL ciphers used by the default HTTP/1.1 downloader (3392
,3442
) - Set the new
DOWNLOADER_CLIENT_TLS_VERBOSE_LOGGING
setting toTrue
to enable debug-level messages about TLS connection parameters after establishing HTTPS connections (2111
,3450
) - Callbacks that receive keyword arguments (see
Request.cb_kwargs <scrapy.http.Request.cb_kwargs>
) can now be tested using the new@cb_kwargs <scrapy.contracts.default.CallbackKeywordArgumentsContract>
spider contract <topics-contracts>
(3985
,3988
) - When a
@scrapes <scrapy.contracts.default.ScrapesContract>
spider contract fails, all missing fields are now reported (766
,3939
) Custom log formats <custom-log-formats>
can now drop messages by having the corresponding methods of the configuredLOG_FORMATTER
returnNone
(3984
,3987
)- A much improved completion definition is now available for Zsh (
4069
)
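For example, the new Request.from_curl class method can turn a cURL command, such as one copied from browser developer tools, into a request (a minimal sketch; the URL and form data are illustrative):

from scrapy import Request

request = Request.from_curl(
    "curl 'https://example.org/post' -X POST"
    " -H 'Content-Type: application/x-www-form-urlencoded'"
    " -d 'foo=bar'"
)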
ItemLoader.load_item() <scrapy.loader.ItemLoader.load_item>
no longer makes later calls toItemLoader.get_output_value() <scrapy.loader.ItemLoader.get_output_value>
orItemLoader.load_item() <scrapy.loader.ItemLoader.load_item>
return empty data (3804
,3819
,3897
,3976
,3998
,4036
)- Fixed
~scrapy.statscollectors.DummyStatsCollector
raising aTypeError
exception (4007
,4052
) FilesPipeline.file_path <scrapy.pipelines.files.FilesPipeline.file_path>
andImagesPipeline.file_path <scrapy.pipelines.images.ImagesPipeline.file_path>
no longer choose file extensions that are not registered with IANA (1287
,3953
,3954
- When using botocore to persist files in S3, all botocore-supported headers are now properly mapped (
3904
,3905
) - FTP passwords in
FEED_URI
containing percent-escaped characters are now properly decoded (3941
) - A memory-handling and error-handling issue in
scrapy.utils.ssl.get_temp_key_info
has been fixed (3920
)
- The documentation now covers how to define and configure a
custom log format <custom-log-formats>
(3616
,3660
) - API documentation added for
~scrapy.exporters.MarshalItemExporter
and~scrapy.exporters.PythonItemExporter
(3973
) - API documentation added for
~scrapy.item.BaseItem
and~scrapy.item.ItemMeta
(3999
) - Minor documentation fixes (
2998
,3398
,3597
,3894
,3934
,3978
,3993
,4022
,4028
,4033
,4046
,4050
,4055
,4056
,4061
,4072
,4071
,4079
,4081
,4089
,4093
)
scrapy.xlib
has been removed (4015
)
- The LevelDB storage backend (
scrapy.extensions.httpcache.LeveldbCacheStorage
) of~scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware
is deprecated (4085
,4092
) - Use of the undocumented
SCRAPY_PICKLED_SETTINGS_TO_OVERRIDE
environment variable is deprecated (3910
) scrapy.item.DictItem
is deprecated, use~scrapy.item.Item
instead (3999
)
Minimum versions of optional Scrapy requirements that are covered by continuous integration tests have been updated:
Lower versions of these optional requirements may work, but it is not guaranteed (
3892
)- GitHub templates for bug reports and feature requests (
3126
,3471
,3749
,3754
) - Continuous integration fixes (
3923
) - Code cleanup (
3391
,3907
,3946
,3950
,4023
,4031
)
Revert the fix for 3804
(3819
), which has a few undesired side effects (3897
, 3976
).
As a result, when an item loader is initialized with an item, ItemLoader.load_item() <scrapy.loader.ItemLoader.load_item>
once again makes later calls to ItemLoader.get_output_value()
<scrapy.loader.ItemLoader.get_output_value>
or ItemLoader.load_item()
<scrapy.loader.ItemLoader.load_item>
return empty data.
Enforce lxml 4.3.5 or lower for Python 3.4 (3912
, 3918
).
Fix Python 2 support (3889
, 3893
, 3896
).
Re-packaging of Scrapy 1.7.0, which was missing some changes in PyPI.
Note
Make sure you install Scrapy 1.7.1. The Scrapy 1.7.0 package in PyPI is the result of an erroneous commit tagging and does not include all the changes described below.
Highlights:
- Improvements for crawls targeting multiple domains
- A cleaner way to pass arguments to callbacks
- A new class for JSON requests
- Improvements for rule-based spiders
- New features for feed exports
429
is now part of theRETRY_HTTP_CODES
setting by defaultThis change is backward incompatible. If you don’t want to retry
429
, you must overrideRETRY_HTTP_CODES
accordingly.~scrapy.crawler.Crawler
,CrawlerRunner.crawl <scrapy.crawler.CrawlerRunner.crawl>
andCrawlerRunner.create_crawler <scrapy.crawler.CrawlerRunner.create_crawler>
no longer accept a~scrapy.spiders.Spider
subclass instance, they only accept a~scrapy.spiders.Spider
subclass now.~scrapy.spiders.Spider
subclass instances were never meant to work, and they were not working as one would expect: instead of using the passed~scrapy.spiders.Spider
subclass instance, their~scrapy.spiders.Spider.from_crawler
method was called to generate a new instance.- Non-default values for the
SCHEDULER_PRIORITY_QUEUE
setting may stop working. Scheduler priority queue classes now need to handle~scrapy.http.Request
objects instead of arbitrary Python data structures. An additional
crawler
parameter has been added to the__init__
method of the~scrapy.core.scheduler.Scheduler
class. Custom scheduler subclasses which don't accept arbitrary parameters in their__init__
method might break because of this change.For more information, see
SCHEDULER
.
See also 1.7-deprecation-removals
below.
- A new scheduler priority queue,
scrapy.pqueues.DownloaderAwarePriorityQueue
, may beenabled <broad-crawls-scheduler-priority-queue>
for a significant scheduling improvement on crawls targeting multiple web domains, at the cost of noCONCURRENT_REQUESTS_PER_IP
support (3520
) - A new
Request.cb_kwargs <scrapy.http.Request.cb_kwargs>
attribute provides a cleaner way to pass keyword arguments to callback methods; see the example after this list (1138
,3563
) - A new
JSONRequest <scrapy.http.JsonRequest>
class offers a more convenient way to build JSON requests (3504
,3505
) - A
process_request
callback passed to the~scrapy.spiders.Rule
__init__
method now receives the~scrapy.http.Response
object that originated the request as its second argument (3682
) - A new
restrict_text
parameter for theLinkExtractor <scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor>
__init__
method allows filtering links by linking text (3622
,3635
) - A new
FEED_STORAGE_S3_ACL
setting allows defining a custom ACL for feeds exported to Amazon S3 (3607
) - A new
FEED_STORAGE_FTP_ACTIVE
setting allows using FTP’s active connection mode for feeds exported to FTP servers (3829
) - A new
METAREFRESH_IGNORE_TAGS
setting allows overriding which HTML tags are ignored when searching a response for HTML meta tags that trigger a redirect (1422
,3768
) - A new
redirect_reasons
request meta key exposes the reason (status code, meta refresh) behind every followed redirect (3581
,3687
) - The
SCRAPY_CHECK
variable is now set to thetrue
string during runs of thecheck
command, which allowsdetecting contract check runs from code <detecting-contract-check-runs>
(3704
,3739
) - A new
Item.deepcopy() <scrapy.item.Item.deepcopy>
method makes it easier todeep-copy items <copying-items>
(1493
,3671
) ~scrapy.extensions.corestats.CoreStats
also logselapsed_time_seconds
now (3638
)- Exceptions from
~scrapy.loader.ItemLoader
input and output processors <topics-loaders-processors>
are now more verbose (3836
,3840
) ~scrapy.crawler.Crawler
,CrawlerRunner.crawl <scrapy.crawler.CrawlerRunner.crawl>
andCrawlerRunner.create_crawler <scrapy.crawler.CrawlerRunner.create_crawler>
now fail gracefully if they receive a~scrapy.spiders.Spider
subclass instance instead of the subclass itself (2283
,3610
,3872
)
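For example, keyword arguments can be passed to a callback through the new cb_kwargs attribute like this (a minimal sketch; spider, selector, and field names are illustrative):

import scrapy

class BooksSpider(scrapy.Spider):
    name = "books"
    start_urls = ["https://example.com/catalog"]

    def parse(self, response):
        for href in response.css("a.book::attr(href)").getall():
            yield response.follow(
                href, callback=self.parse_book, cb_kwargs={"category": "fiction"}
            )

    def parse_book(self, response, category):
        # Entries from cb_kwargs arrive as regular keyword arguments
        yield {"url": response.url, "category": category}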
~scrapy.spidermiddlewares.SpiderMiddleware.process_spider_exception
is now also invoked for generators (220
,2061
)- System exceptions like KeyboardInterrupt are no longer caught (
3726
) ItemLoader.load_item() <scrapy.loader.ItemLoader.load_item>
no longer makes later calls toItemLoader.get_output_value() <scrapy.loader.ItemLoader.get_output_value>
orItemLoader.load_item() <scrapy.loader.ItemLoader.load_item>
return empty data (3804
,3819
)- The images pipeline (
~scrapy.pipelines.images.ImagesPipeline
) no longer ignores these Amazon S3 settings:AWS_ENDPOINT_URL
,AWS_REGION_NAME
,AWS_USE_SSL
,AWS_VERIFY
(3625
) - Fixed a memory leak in
scrapy.pipelines.media.MediaPipeline
affecting, for example, non-200 responses and exceptions from custom middlewares (3813
- Requests with private callbacks are now correctly deserialized from disk (
3790
) FormRequest.from_response() <scrapy.http.FormRequest.from_response>
now handles invalid methods like major web browsers (3777
,3794
)
- A new topic,
topics-dynamic-content
, covers recommended approaches to read dynamically-loaded data (3703
) topics-broad-crawls
now features information about memory usage (1264
,3866
)- The documentation of
~scrapy.spiders.Rule
now covers how to access the text of a link when using~scrapy.spiders.CrawlSpider
(3711
,3712
) - A new section,
httpcache-storage-custom
, covers writing a custom cache storage backend for~scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware
(3683
,3692
) - A new
FAQ <faq>
entry,faq-split-item
, explains what to do when you want to split an item into multiple items from an item pipeline (2240
,3672
) - Updated the
FAQ entry about crawl order <faq-bfo-dfo>
to explain why the first few requests rarely follow the desired order (1739
,3621
) - The
LOGSTATS_INTERVAL
setting (3730
), theFilesPipeline.file_path <scrapy.pipelines.files.FilesPipeline.file_path>
andImagesPipeline.file_path <scrapy.pipelines.images.ImagesPipeline.file_path>
methods (2253
,3609
) and theCrawler.stop() <scrapy.crawler.Crawler.stop>
method (3842
) are now documented - Some parts of the documentation that were confusing or misleading are now clearer (
1347
,1789
,2289
,3069
,3615
,3626
,3668
,3670
,3673
,3728
,3762
,3861
,3882
) - Minor documentation fixes (
3648
,3649
,3662
,3674
,3676
,3694
,3724
,3764
,3767
,3791
,3797
,3806
,3812
)
The following deprecated APIs have been removed (3578
):
scrapy.conf
(useCrawler.settings <scrapy.crawler.Crawler.settings>
)- From
scrapy.core.downloader.handlers
:http.HttpDownloadHandler
(usehttp10.HTTP10DownloadHandler
)
scrapy.loader.ItemLoader._get_values
(use_get_xpathvalues
)scrapy.loader.XPathItemLoader
(use~scrapy.loader.ItemLoader
)scrapy.log
(seetopics-logging
)- From
scrapy.pipelines
:files.FilesPipeline.file_key
(usefile_path
)images.ImagesPipeline.file_key
(usefile_path
)images.ImagesPipeline.image_key
(usefile_path
)images.ImagesPipeline.thumb_key
(usethumb_path
)
- From both
scrapy.selector
andscrapy.selector.lxmlsel
:HtmlXPathSelector
(use~scrapy.selector.Selector
)XmlXPathSelector
(use~scrapy.selector.Selector
)XPathSelector
(use~scrapy.selector.Selector
)XPathSelectorList
(use~scrapy.selector.Selector
)
- From
scrapy.selector.csstranslator
:ScrapyGenericTranslator
(use parsel.csstranslator.GenericTranslator)ScrapyHTMLTranslator
(use parsel.csstranslator.HTMLTranslator)ScrapyXPathExpr
(use parsel.csstranslator.XPathExpr)
- From
~scrapy.selector.Selector
:_root
(both the__init__
method argument and the object property, useroot
)extract_unquoted
(usegetall
)select
(usexpath
)
- From
~scrapy.selector.SelectorList
:extract_unquoted
(usegetall
)select
(usexpath
)x
(usexpath
)
scrapy.spiders.BaseSpider
(use~scrapy.spiders.Spider
)- From
~scrapy.spiders.Spider
(and subclasses):DOWNLOAD_DELAY
(usedownload_delay <spider-download_delay-attribute>
)set_crawler
(use~scrapy.spiders.Spider.from_crawler
)
scrapy.spiders.spiders
(use~scrapy.spiderloader.SpiderLoader
)scrapy.telnet
(usescrapy.extensions.telnet
)- From
scrapy.utils.python
:str_to_unicode
(useto_unicode
)unicode_to_str
(useto_bytes
)
scrapy.utils.response.body_or_str
The following deprecated settings have also been removed (3578
):
SPIDER_MANAGER_CLASS
(useSPIDER_LOADER_CLASS
)
- The
queuelib.PriorityQueue
value for theSCHEDULER_PRIORITY_QUEUE
setting is deprecated. Usescrapy.pqueues.ScrapyPriorityQueue
instead. process_request
callbacks passed to~scrapy.spiders.Rule
that do not accept two arguments are deprecated.- The following modules are deprecated:
scrapy.utils.http
(use w3lib.http)scrapy.utils.markup
(use w3lib.html)scrapy.utils.multipart
(use urllib3)
- The
scrapy.utils.datatypes.MergeDict
class is deprecated for Python 3 code bases. Use~collections.ChainMap
instead. (3878
) - The
scrapy.utils.gz.is_gzipped
function is deprecated. Usescrapy.utils.gz.gzip_magic_number
instead.
- It is now possible to run all tests from the same tox environment in parallel; the documentation now covers
this and other ways to run tests <running-tests>
(3707
) - It is now possible to generate an API documentation coverage report (
3806
,3810
,3860
) - The
documentation policies <documentation-policies>
now require docstrings (3701
) that follow PEP 257 (3748
) - Internal fixes and cleanup (
3629
,3643
,3684
,3698
,3734
,3735
,3736
,3737
,3809
,3821
,3825
,3827
,3833
,3857
,3877
)
Highlights:
- better Windows support;
- Python 3.7 compatibility;
- big documentation improvements, including a switch from
.extract_first()
+.extract()
API to.get()
+.getall()
API; - feed exports, FilePipeline and MediaPipeline improvements;
- better extensibility:
item_error
andrequest_reached_downloader
signals;from_crawler
support for feed exporters, feed storages and dupefilters. scrapy.contracts
fixes and new features;- telnet console security improvements, first released as a backport in
release-1.5.2
; - clean-up of the deprecated code;
- various bug fixes, small new features and usability improvements across the codebase.
While these are not changes in Scrapy itself, but rather in the parsel library which Scrapy uses for XPath/CSS selectors, they are worth mentioning here. Scrapy now depends on parsel >= 1.5, and the Scrapy documentation is updated to follow recent parsel
API conventions.
The most visible change is that .get()
and .getall()
selector methods are now preferred over .extract_first()
and .extract()
. We feel that these new methods result in more concise and readable code. See old-extraction-api
for more details.
Note
There are currently no plans to deprecate .extract()
and .extract_first()
methods.
Another useful new feature is the introduction of Selector.attrib
and SelectorList.attrib
properties, which make it easier to get attributes of HTML elements. See selecting-attributes
.
CSS selectors are cached in parsel >= 1.5, which makes them faster when the same CSS path is used many times. This is very common in the case of Scrapy spiders: callbacks are usually called several times, on different pages.
If you're using custom Selector
or SelectorList
subclasses, a backward incompatible change in parsel may affect your code. See parsel changelog for a detailed description, as well as for the full list of improvements.
Backward incompatible: Scrapy's telnet console now requires username and password. See topics-telnetconsole
for more details. This change fixes a security issue; see release-1.5.2
release notes for details.
from_crawler
support is added to feed exporters and feed storages. This, among other things, allows accessing Scrapy settings from custom feed storages and exporters (1605
,3348
).from_crawler
support is added to dupefilters (2956
); this allows accessing e.g. settings or the spider from a dupefilter (see the sketch after this list). item_error
is fired when an error happens in a pipeline (3256
);request_reached_downloader
is fired when Downloader gets a new Request; this signal can be useful e.g. for custom Schedulers (3393
).- new SitemapSpider
~.SitemapSpider.sitemap_filter
method, which allows selecting sitemap entries based on their attributes in SitemapSpider subclasses (3512
). - Lazy loading of Downloader Handlers is now optional; this enables better initialization error handling in custom Downloader Handlers (
3394
).
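For instance, the new from_crawler support for dupefilters makes settings reachable when the dupefilter is built (a minimal sketch; the custom setting name is hypothetical, and the filter would be enabled through the DUPEFILTER_CLASS setting):

from scrapy.dupefilters import RFPDupeFilter

class SettingsAwareDupeFilter(RFPDupeFilter):
    @classmethod
    def from_crawler(cls, crawler):
        df = cls(debug=crawler.settings.getbool("DUPEFILTER_DEBUG"))
        # Any setting is reachable here, including custom ones:
        df.verbose = crawler.settings.getbool("MY_DUPEFILTER_VERBOSE")
        return df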
- Expose more options for S3FilesStore:
AWS_ENDPOINT_URL
,AWS_USE_SSL
,AWS_VERIFY
,AWS_REGION_NAME
. For example, this allows using alternative or self-hosted AWS-compatible providers (2609
,3548
). - ACL support for Google Cloud Storage:
FILES_STORE_GCS_ACL
andIMAGES_STORE_GCS_ACL
(3199
).
- Exceptions in contracts code are handled better (
3377
); dont_filter=True
is used for contract requests, which allows testing different callbacks with the same URL (3381
);request_cls
attribute in Contract subclasses allows using different Request classes in contracts, for example FormRequest (3383
).- Fixed errback handling in contracts, e.g. for cases where a contract is executed for URL which returns non-200 response (
3371
).
- more stats for RobotsTxtMiddleware (
3100
) - INFO log level is used to show telnet host/port (
3115
) - a message is added to IgnoreRequest in RobotsTxtMiddleware (
3113
) - better validation of
url
argument inResponse.follow
(3131
) - non-zero exit code is returned from Scrapy commands when error happens on spider initialization (
3226
) - Link extraction improvements: "ftp" is added to scheme list (
3152
); "flv" is added to common video extensions (3165
) - better error message when an exporter is disabled (
3358
); scrapy shell --help
mentions syntax required for local files (./file.html
) -3496
.- Referer header value is added to RFPDupeFilter log messages (
3588
)
- fixed issue with extra blank lines in .csv exports under Windows (
3039
); - proper handling of pickling errors in Python 3 when serializing objects for disk queues (
3082
) - flags are now preserved when copying Requests (
3342
); - FormRequest.from_response clickdata shouldn't ignore elements with
input[type=image]
(3153
). - FormRequest.from_response should preserve duplicate keys (
3247
)
- Docs are re-written to suggest .get/.getall API instead of .extract/.extract_first. Also,
topics-selectors
docs are updated and re-structured to match latest parsel docs; they now contain more topics, such asselecting-attributes
ortopics-selectors-css-extensions
(3390
). topics-developer-tools
is a new tutorial which replaces old Firefox and Firebug tutorials (3400
).- SCRAPY_PROJECT environment variable is documented (
3518
); - troubleshooting section is added to install instructions (
3517
); - improved links to beginner resources in the tutorial (
3367
,3468
); - fixed
RETRY_HTTP_CODES
default values in docs (3335
); - remove unused
DEPTH_STATS
option from docs (3245
); - other cleanups (
3347
,3350
,3445
,3544
,3605
).
Compatibility shims for pre-1.0 Scrapy module names are removed (3318
):
scrapy.command
scrapy.contrib
(with all submodules)scrapy.contrib_exp
(with all submodules)scrapy.dupefilter
scrapy.linkextractor
scrapy.project
scrapy.spider
scrapy.spidermanager
scrapy.squeue
scrapy.stats
scrapy.statscol
scrapy.utils.decorator
See module-relocations
for more information, or use suggestions from Scrapy 1.5.x deprecation warnings to update your code.
Other deprecation removals:
- Deprecated scrapy.interfaces.ISpiderManager is removed; please use scrapy.interfaces.ISpiderLoader.
- Deprecated
CrawlerSettings
class is removed (3327
). - Deprecated
Settings.overrides
andSettings.defaults
attributes are removed (3327
,3359
).
- All Scrapy tests now pass on Windows; Scrapy testing suite is executed in a Windows environment on CI (
3315
). - Python 3.7 support (
3326
,3150
,3547
). - Testing and CI fixes (
3526
,3538
,3308
,3311
,3309
,3305
,3210
,3299
) scrapy.http.cookies.CookieJar.clear
accepts "domain", "path" and "name" optional arguments (3231
- additional files are included in sdist (
3495
); - code style fixes (
3405
,3304
); - unneeded .strip() call is removed (
3519
); - collections.deque is used to store MiddlewareManager methods instead of a list (
3476
)
Security bugfix: The Telnet console extension could be easily exploited by rogue websites POSTing content to http://localhost:6023. We haven't found a way to exploit it from Scrapy, but it is very easy to trick a browser into doing so, which elevates the risk for local development environments.
The fix is backward incompatible: it enables telnet user-password authentication by default with a randomly generated password. If you can't upgrade right away, please consider setting
TELNETCONSOLE_PORT
out of its default value.See
telnet console <topics-telnetconsole>
documentation for more info- Backport CI build failure under GCE environment due to boto import error.
This is a maintenance release with important bug fixes, but no new features:
O(N^2)
gzip decompression issue which affected Python 3 and PyPy is fixed (3281
);- skipping of TLS validation errors is improved (
3166
); - Ctrl-C handling is fixed in Python 3.5+ (
3096
); - testing fixes (
3092
,3263
); - documentation improvements (
3058
,3059
,3089
,3123
,3127
,3189
,3224
,3280
,3279
,3201
,3260
,3284
,3298
,3294
).
This release brings small new features and improvements across the codebase. Some highlights:
- Google Cloud Storage is supported in FilesPipeline and ImagesPipeline.
- Crawling with proxy servers becomes more efficient, as connections to proxies can now be reused.
- Warnings, exception and logging messages are improved to make debugging easier.
scrapy parse
command now allows setting custom request meta via the --meta
argument.- Compatibility with Python 3.6, PyPy and PyPy3 is improved; PyPy and PyPy3 are now supported officially, by running tests on CI.
- Better default handling of HTTP 308, 522 and 524 status codes.
- Documentation is improved, as usual.
- Scrapy 1.5 drops support for Python 3.3.
- Default Scrapy User-Agent now uses https link to scrapy.org (
2983
). This is technically backward-incompatible; overrideUSER_AGENT
if you relied on the old value. - Logging of settings overridden by
custom_settings
is fixed; this is technically backward-incompatible because the logger changes from[scrapy.utils.log]
to[scrapy.crawler]
. If you're parsing Scrapy logs, please update your log parsers (1343
). - LinkExtractor now ignores
m4v
extension by default; this is a change in behavior. - 522 and 524 status codes are added to
RETRY_HTTP_CODES
(2851
)
- Support
<link>
tags inResponse.follow
(2785
) - Support for
ptpython
REPL (2654
) - Google Cloud Storage support for FilesPipeline and ImagesPipeline (
2923
). - New
--meta
option of the "scrapy parse" command allows passing additional request.meta (2883
) - Populate spider variable when using
shell.inspect_response
(2812
) - Handle HTTP 308 Permanent Redirect (
2844
) - Add 522 and 524 to
RETRY_HTTP_CODES
(2851
) - Log versions information at startup (
2857
) scrapy.mail.MailSender
now works in Python 3 (it requires Twisted 17.9.0)- Connections to proxy servers are reused (
2743
) - Add template for a downloader middleware (
2755
) - Explicit message for NotImplementedError when parse callback not defined (
2831
) - CrawlerProcess got an option to disable installation of root log handler (
2921
) - LinkExtractor now ignores
m4v
extension by default - Better log messages for responses over
DOWNLOAD_WARNSIZE
andDOWNLOAD_MAXSIZE
limits (2927
- Show a warning when a URL is added to
Spider.allowed_domains
instead of a domain (2250
).
- Fix logging of settings overridden by
custom_settings
; this is technically backward-incompatible because the logger changes from[scrapy.utils.log]
to[scrapy.crawler]
, so please update your log parsers if needed (1343
) - Default Scrapy User-Agent now uses https link to scrapy.org (
2983
). This is technically backward-incompatible; overrideUSER_AGENT
if you relied on the old value. - Fix PyPy and PyPy3 test failures, support them officially (
2793
,2935
,2990
,3050
,2213
,3048
) - Fix DNS resolver when
DNSCACHE_ENABLED=False
(2811
) - Add
cryptography
for Debian Jessie tox test env (2848
) - Add verification to check if Request callback is callable (
2766
) - Port
extras/qpsclient.py
to Python 3 (2849
) - Use getfullargspec under the scenes for Python 3 to stop DeprecationWarning (
2862
) - Update deprecated test aliases (
2876
) - Fix
SitemapSpider
support for alternate links (2853
)
- Added missing bullet point for the
AUTOTHROTTLE_TARGET_CONCURRENCY
setting. (2756
) - Update Contributing docs, document new support channels (
2762
, issue:3038) - Include references to Scrapy subreddit in the docs
- Fix broken links; use https:// for external links (
2978
,2982
,2958
) - Document CloseSpider extension better (
2759
) - Use
pymongo.collection.Collection.insert_one()
in MongoDB example (2781
) - Spelling mistake and typos (
2828
,2837
,2884
,2924
) - Clarify
CSVFeedSpider.headers
documentation (2826
) - Document
DontCloseSpider
exception and clarifyspider_idle
(2791
) - Update "Releases" section in README (
2764
) - Fix rst syntax in
DOWNLOAD_FAIL_ON_DATALOSS
docs (2763
) - Small fix in description of startproject arguments (
2866
) - Clarify data types in Response.body docs (
2922
) - Add a note about
request.meta['depth']
to DepthMiddleware docs (2374
) - Add a note about
request.meta['dont_merge_cookies']
to CookiesMiddleware docs (2999
) - Up-to-date example of project structure (
2964
,2976
) - A better example of ItemExporters usage (
2989
) - Document
from_crawler
methods for spider and downloader middlewares (3019
)
Scrapy 1.4 does not bring that many breathtaking new features but quite a few handy improvements nonetheless.
Scrapy now supports anonymous FTP sessions with customizable user and password via the new FTP_USER
and FTP_PASSWORD
settings. And if you're using Twisted version 17.1.0 or above, FTP is now available with Python 3.
There's a new response.follow <scrapy.http.TextResponse.follow>
method for creating requests; it is now the recommended way to create Requests in Scrapy spiders. This method makes it easier to write correct spiders; response.follow
has several advantages over creating scrapy.Request
objects directly:
- it handles relative URLs;
- it works properly with non-ASCII URLs on non-UTF-8 pages;
- in addition to absolute and relative URLs it supports Selectors; for
<a>
elements it can also extract their href values.
For example, instead of this:
for href in response.css('li.page a::attr(href)').extract():
    url = response.urljoin(href)
    yield scrapy.Request(url, self.parse, encoding=response.encoding)
One can now write this:
for a in response.css('li.page a'):
    yield response.follow(a, self.parse)
Link extractors are also improved. They work similarly to what a regular modern browser would do: leading and trailing whitespace are removed from attributes (think href=" http://example.com"
) when building Link
objects. This whitespace-stripping also happens for action
attributes with FormRequest
.
Please also note that link extractors no longer canonicalize URLs by default. This puzzled users every now and then, and it is not what browsers actually do, so we removed that extra transformation on extracted links.
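If your project relied on canonicalized links, you can opt back in per extractor; a minimal sketch:

from scrapy.linkextractors import LinkExtractor

# Restore the pre-1.4 behavior by canonicalizing extracted links again
link_extractor = LinkExtractor(canonicalize=True)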
For those of you wanting more control over the Referer:
header that Scrapy sends when following links, you can set your own Referrer Policy
. Prior to Scrapy 1.4, the default RefererMiddleware
would simply and blindly set it to the URL of the response that generated the HTTP request (which could leak information on your URL seeds). By default, Scrapy now behaves much like your regular browser does. And this policy is fully customizable with W3C standard values (or with something really custom of your own if you wish). See REFERRER_POLICY
for details.
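For instance, a stricter policy can be applied project-wide; a minimal sketch in settings.py using one of the W3C standard values:

# settings.py -- send the Referer header only for same-origin requests
REFERRER_POLICY = 'same-origin'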
To make Scrapy spiders easier to debug, Scrapy logs more stats by default in 1.4: memory usage stats, detailed retry stats, and detailed HTTP error code stats. Similarly, the HTTP cache path is now also visible in the logs.
Last but not least, Scrapy now has the option to make JSON and XML items more human-readable, with newlines between items and even custom indenting offset, using the new FEED_EXPORT_INDENT
setting.
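For example, a sketch of pretty-printing JSON feeds with a four-space indent:

# settings.py -- insert newlines between items and indent nested structures
FEED_EXPORT_INDENT = 4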
Enjoy! (Or read on for the rest of the changes in this release.)
- Default to
canonicalize=False
inscrapy.linkextractors.LinkExtractor <scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor>
(2537
, fixes1941
and1982
): warning, this is technically backward-incompatible - Enable memusage extension by default (
2539
, fixes2187
); this is technically backward-incompatible so please check if you have any non-defaultMEMUSAGE_***
options set. EDITOR
environment variable now takes precedence overEDITOR
option defined in settings.py (1829
); Scrapy default settings no longer depend on environment variables. This is technically a backward incompatible change.Spider.make_requests_from_url
is deprecated (1728
, fixes1495
).
- Accept proxy credentials in
proxy
request meta key (2526
- Support brotli-compressed content; requires the optional brotlipy package (
2535
) - New
response.follow <response-follow-example>
shortcut for creating requests (1940
) - Added
flags
argument and attribute toRequest <scrapy.http.Request>
objects (2047
) - Support Anonymous FTP (
2342
) - Added
retry/count
,retry/max_reached
andretry/reason_count/<reason>
stats toRetryMiddleware <scrapy.downloadermiddlewares.retry.RetryMiddleware>
(2543
) - Added
httperror/response_ignored_count
andhttperror/response_ignored_status_count/<status>
stats toHttpErrorMiddleware <scrapy.spidermiddlewares.httperror.HttpErrorMiddleware>
(2566
) - Customizable
Referrer policy <REFERRER_POLICY>
inRefererMiddleware <scrapy.spidermiddlewares.referer.RefererMiddleware>
(2306
) - New
data:
URI download handler (2334
, fixes2156
) - Log cache directory when HTTP Cache is used (
2611
, fixes2604
- Warn users when a project contains duplicate spider names (fixes
2181
) scrapy.utils.datatypes.CaselessDict
now acceptsMapping
instances and not only dicts (2646
)Media downloads <topics-media-pipeline>
, with~scrapy.pipelines.files.FilesPipeline
or~scrapy.pipelines.images.ImagesPipeline
, can now optionally handle HTTP redirects using the newMEDIA_ALLOW_REDIRECTS
setting (2616
, fixes2004
- Accept incomplete responses from websites using the new
DOWNLOAD_FAIL_ON_DATALOSS
setting (2590
, fixes2586
) - Optional pretty-printing of JSON and XML items via
FEED_EXPORT_INDENT
setting (2456
, fixes1327
) - Allow dropping fields in
FormRequest.from_response
formdata whenNone
value is passed (667
) - Per-request retry times with the new
max_retry_times
meta key; see the sketch after this list (2642
) python -m scrapy
as a more explicit alternative toscrapy
command (2740
)
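As a sketch of the per-request retry limit mentioned above (the spider name and URL are hypothetical):

import scrapy

class RetryDemoSpider(scrapy.Spider):
    name = 'retry_demo'  # hypothetical spider

    def start_requests(self):
        # Allow up to 10 retries for this request, overriding the global RETRY_TIMES
        yield scrapy.Request('http://example.com', self.parse,
                             meta={'max_retry_times': 10})

    def parse(self, response):
        pass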
- LinkExtractor now strips leading and trailing whitespaces from attributes (
2547
, fixes1614
) - Properly handle whitespaces in action attribute in
~scrapy.http.FormRequest
(2548
) - Buffer CONNECT response bytes from proxy until all HTTP headers are received (
2495
, fixes2491
) - FTP downloader now works on Python 3, provided you use Twisted>=17.1 (
2599
) - Use body to choose response type after decompressing content (
2393
, fixes2145
) - Always decompress
Content-Encoding: gzip
atHttpCompressionMiddleware <scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware>
stage (2391
) - Respect custom log level in
Spider.custom_settings
(2581
, fixes1612
) - 'make htmlview' fix for macOS (
2661
) - Remove "commands" from the command list (
2695
) - Fix duplicate Content-Length header for POST requests with empty body (
2677
) - Properly cancel large downloads, i.e. above
DOWNLOAD_MAXSIZE
(1616
) - ImagesPipeline: fixed processing of transparent PNG images with palette (
2675
)
- Tests: remove temp files and folders (
2570
), fixed ProjectUtilsTest on macOS (2569
), use portable pypy for Linux on Travis CI (2710
) - Separate building request from
_requests_to_follow
in CrawlSpider (2562
) - Remove “Python 3 progress” badge (
2567
) - Add a couple more lines to
.gitignore
(2557
) - Remove bumpversion prerelease configuration (
2159
) - Add codecov.yml file (
2750
) - Set context factory implementation based on Twisted version (
2577
, fixes2560
) - Add omitted
self
arguments in default project middleware template (2595
) - Remove redundant
slot.add_request()
call in ExecutionEngine (2617
) - Catch more specific
os.error
exception inscrapy.pipelines.files.FSFilesStore
(2644
) - Change "localhost" test server certificate (
2720
) - Remove unused
MEMUSAGE_REPORT
setting (2576
)
- Binary mode is required for exporters (
2564
, fixes2553
) - Mention issue with
FormRequest.from_response <scrapy.http.FormRequest.from_response>
due to bug in lxml (2572
) - Use single quotes uniformly in templates (
2596
) - Document
ftp_user
andftp_password
meta keys (2587
) - Removed section on deprecated
contrib/
(2636
) - Recommend Anaconda when installing Scrapy on Windows (
2477
, fixes2475
) - FAQ: rewrite note on Python 3 support on Windows (
2690
) - Rearrange selector sections (
2705
) - Remove
__nonzero__
from~scrapy.selector.SelectorList
docs (2683
) - Mention how to disable request filtering in documentation of
DUPEFILTER_CLASS
setting (2714
) - Add sphinx_rtd_theme to docs setup readme (
2668
) - Open file in text mode in JSON item writer example (
2729
) - Clarify
allowed_domains
example (2670
)
- Make
SpiderLoader
raiseImportError
again by default for missing dependencies and wrongSPIDER_MODULES
. These exceptions were silenced as warnings since 1.3.0. A new setting lets you toggle between a warning and an exception if needed; see SPIDER_LOADER_WARN_ONLY
for details.
- Preserve request class when converting to/from dicts (utils.reqser) (
2510
). - Use consistent selectors for author field in tutorial (
2551
). - Fix TLS compatibility in Twisted 17+ (
2558
)
- Support
'True'
and'False'
string values for boolean settings (2519
); you can now do something likescrapy crawl myspider -s REDIRECT_ENABLED=False
. - Support kwargs with
response.xpath()
to useXPath variables <topics-selectors-xpath-variables>
and ad-hoc namespace declarations; this requires at least Parsel v1.1 (see the sketch after this list) (2457
). - Add support for Python 3.6 (
2485
). - Run tests on PyPy (warning: some tests still fail, so PyPy is not supported yet).
- Enforce
DNS_TIMEOUT
setting (2496
). - Fix
view
command; it was a regression in v1.3.0 (2503
). - Fix tests regarding
*_EXPIRES settings
with Files/Images pipelines (2460
). - Fix name of generated pipeline class when using basic project template (
2466
). - Fix compatibility with Twisted 17+ (
2496
,2528
). - Fix
scrapy.Item
inheritance on Python 3.6 (2511
). - Enforce numeric values for components order in
SPIDER_MIDDLEWARES
,DOWNLOADER_MIDDLEWARES
,EXTENSIONS
andSPIDER_CONTRACTS
(2420
).
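A minimal sketch of the XPath variables support mentioned above (the element id is hypothetical):

# Pass $val as an XPath variable instead of formatting it into the query string
response.xpath('//div[@id=$val]/text()', val='description').extract_first()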
- Reword Code of Conduct section and upgrade to Contributor Covenant v1.4 (
2469
). - Clarify that passing spider arguments converts them to spider attributes (
2483
). - Document
formid
argument onFormRequest.from_response()
(2497
). - Add .rst extension to README files (
2507
). - Mention LevelDB cache storage backend (
2525
). - Use
yield
in sample callback code (2533
). - Add note about HTML entities decoding with
.re()/.re_first()
(1704
). - Typos (
2512
,2534
,2531
).
- Remove redundant check in
MetaRefreshMiddleware
(2542
). - Faster checks in
LinkExtractor
for allow/deny patterns (2538
). - Remove dead code supporting old Twisted versions (
2544
).
This release comes rather soon after 1.2.2 for one main reason: it was found that releases from 0.18 up to 1.2.2 (inclusive) use some code backported from Twisted (scrapy.xlib.tx.*
), even if newer Twisted modules are available. Scrapy now uses twisted.web.client
and twisted.internet.endpoints
directly. (See also cleanups below.)
As it is a major change, we wanted to get the bug fix out quickly while not breaking any projects using the 1.2 series.
MailSender
now accepts single strings as values forto
andcc
arguments (2272
)scrapy fetch url
,scrapy shell url
andfetch(url)
inside Scrapy shell now follow HTTP redirections by default (2290
); Seefetch
andshell
for details.HttpErrorMiddleware
now logs errors withINFO
level instead ofDEBUG
; this is technically backward incompatible so please check your log parsers.- By default, logger names now use a long-form path, e.g.
[scrapy.extensions.logstats]
, instead of the shorter "top-level" variant of prior releases (e.g.[scrapy]
); this is backward incompatible if you have log parsers expecting the short logger name part. You can switch back to short logger names usingLOG_SHORT_NAMES
set toTrue
.
- Scrapy now requires Twisted >= 13.1, which is already the case for many Linux distributions.
- As a consequence, we got rid of
scrapy.xlib.tx.*
modules, which copied some of Twisted's code for users stuck with an "old" Twisted version
is deprecated and removed from the default downloader middlewares.
- Packaging fix: disallow unsupported Twisted versions in setup.py
- Fix a cryptic traceback when a pipeline fails on
open_spider()
(2011
) - Fix embedded IPython shell variables (fixing
396
that re-appeared in 1.2.0, fixed in2418
) - A couple of patches when dealing with robots.txt:
- handle (non-standard) relative sitemap URLs (
2390
) - handle non-ASCII URLs and User-Agents in Python 2 (
2373
)
- Document
"download_latency"
key inRequest
's meta
dict (2033
) - Remove page on (deprecated & unsupported) Ubuntu packages from ToC (
2335
) - A few fixed typos (
2346
,2369
,2380
) and clarifications (2354
,2325
,2414
)
- Advertise conda-forge as Scrapy's official conda channel (
2387
) - More helpful error messages when trying to use
.css()
or.xpath()
on non-Text Responses (2264
) startproject
command now generates a samplemiddlewares.py
file (2335
)- Add more dependencies' version info in
scrapy version
verbose output (2404
) - Remove all
*.pyc
files from source distribution (2386
)
- Include OpenSSL's more permissive default ciphers when establishing TLS/SSL connections (
2314
). - Fix "Location" HTTP header decoding on non-ASCII URL redirects (
2321
).
- Fix JsonWriterPipeline example (
2302
). - Various notes:
2330
on spider names,2329
on middleware methods processing order,2327
on getting multi-valued HTTP headers as lists.
- Removed
www.
fromstart_urls
in built-in spider templates (2299
).
- New
FEED_EXPORT_ENCODING
setting to customize the encoding used when writing items to a file. This can be used to turn off\uXXXX
escapes in JSON output. It is also useful for those wanting something other than UTF-8 for XML or CSV output; see the sketch after this list (2034
). startproject
command now supports an optional destination directory to override the default one based on the project name (2005
).- New
SCHEDULER_DEBUG
setting to log requests serialization failures (1610
). - JSON encoder now supports serialization of
set
instances (2058
). - Interpret
application/json-amazonui-streaming
asTextResponse
(1503
). scrapy
is imported by default when using shell tools (shell
,inspect_response <topics-shell-inspect-response>
) (2248
).
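A minimal sketch of the FEED_EXPORT_ENCODING setting mentioned above:

# settings.py -- write feeds as UTF-8 instead of escaping non-ASCII characters
FEED_EXPORT_ENCODING = 'utf-8'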
- DefaultRequestHeaders middleware now runs before UserAgent middleware (
2088
). Warning: this is technically backward incompatible, though we consider this a bug fix. - HTTP cache extension and plugins that use the
.scrapy
data directory now work outside projects (1581
). Warning: this is technically backward incompatible, though we consider this a bug fix. Selector
does not allow passing bothresponse
andtext
anymore (2153
).- Fixed logging of wrong callback name with
scrapy parse
(2169
). - Fix for an odd gzip decompression bug (
1606
). - Fix for selected callbacks when using
CrawlSpider
withscrapy parse <parse>
(2225
). - Fix for invalid JSON and XML files when spider yields no items (
872
). - Implement
flush()
forStreamLogger
avoiding a warning in logs (2125
).
canonicalize_url
has been moved to w3lib.url (2168
).
Scrapy's new requirements baseline is Debian 8 "Jessie". It was previously Ubuntu 12.04 Precise. What this means in practice is that we run continuous integration tests with at least these (main) package versions: Twisted 14.0, pyOpenSSL 0.14, lxml 3.4.
Scrapy may very well work with older versions of these packages (the code base still has switches for older Twisted versions, for example), but this is not guaranteed, because it is no longer tested.
- Grammar fixes:
2128
,1566
. - Download stats badge removed from README (
2160
). - New Scrapy
architecture diagram <topics-architecture>
(2165
). - Updated
Response
parameters documentation (2197
). - Reworded misleading
RANDOMIZE_DOWNLOAD_DELAY
description (2190
). - Add StackOverflow as a support channel (
2257
).
- Packaging fix: disallow unsupported Twisted versions in setup.py
- Class attributes for subclasses of
ImagesPipeline
andFilesPipeline
work as they did before 1.1.1 (2243
, fixes2198
)
Overview <intro-overview>
andtutorial <intro-tutorial>
rewritten to use http://toscrape.com websites (2236
,2249
,2252
).
- Introduce a missing
IMAGES_STORE_S3_ACL
setting to override the default ACL policy inImagesPipeline
when uploading images to S3 (note that the default ACL policy has been "private" -- instead of "public-read" -- since Scrapy 1.1.0)
default value set back to 90 (the regression was introduced in 1.1.1)
- Add "Host" header in CONNECT requests to HTTPS proxies (
2069
) - Use response
body
when choosing response class (2001
, fixes2000
) - Do not fail on canonicalizing URLs with wrong netlocs (
2038
, fixes2010
- A few fixes for
HttpCompressionMiddleware
(andSitemapSpider
):- Do not decode HEAD responses (
2008
, fixes1899
) - Handle charset parameter in gzip Content-Type header (
2050
, fixes2049
) - Do not decompress gzip octet-stream responses (
2065
, fixes2063
)
- Catch (and ignore with a warning) exception when verifying certificate against IP-address hosts (
2094
, fixes2092
) - Make
FilesPipeline
andImagesPipeline
backward compatible again regarding the use of legacy class attributes for customization (1989
, fixes1985
)
- Enable genspider command outside project folder (
2052
) - Retry HTTPS CONNECT
TunnelError
by default (1974
)
FEED_TEMPDIR
setting at lexicographical position (9b3c72c
)- Use idiomatic
.extract_first()
in overview (1994
) - Update years in copyright notice (
c2c8036
) - Add information and example on errbacks (
1995
) - Use "url" variable in downloader middleware example (
2015
) - Grammar fixes (
2054
,2120
) - New FAQ entry on using BeautifulSoup in spider callbacks (
2048
) - Add notes about Scrapy not working on Windows with Python 3 (
2060
) - Encourage complete titles in pull requests (
2026
)
- Upgrade py.test requirement on Travis CI and Pin pytest-cov to 2.2.1 (
2095
)
This 1.1 release brings a lot of interesting features and bug fixes:
- Scrapy 1.1 has beta Python 3 support (requires Twisted >= 15.5). See
news_betapy3
for more details and some limitations. - Hot new features:
- Item loaders now support nested loaders (
1467
). FormRequest.from_response
improvements (1382
,1137
).- Added setting
AUTOTHROTTLE_TARGET_CONCURRENCY
and improved AutoThrottle docs (1324
). - Added
response.text
to get body as unicode (1730
). - Anonymous S3 connections (
1358
). - Deferreds in downloader middlewares (
1473
). This enables better robots.txt handling (1471
). - HTTP caching now follows RFC2616 more closely, added settings
HTTPCACHE_ALWAYS_STORE
andHTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS
(1151
). - Selectors were extracted to the parsel library (
1409
). This means you can use Scrapy Selectors without Scrapy and also upgrade the selectors engine without needing to upgrade Scrapy. - HTTPS downloader now does TLS protocol negotiation by default, instead of forcing TLS 1.0. You can also set the SSL/TLS method using the new
DOWNLOADER_CLIENT_TLS_METHOD
setting.
- These bug fixes may require your attention:
- Don't retry bad requests (HTTP 400) by default (
1289
). If you need the old behavior, add400
toRETRY_HTTP_CODES
. - Fix shell files argument handling (
1710
,1550
). If you tryscrapy shell index.html
it will try to load the URL http://index.html; use scrapy shell ./index.html
to load a local file. - Robots.txt compliance is now enabled by default for newly-created projects (
1724
). Scrapy will also wait for robots.txt to be downloaded before proceeding with the crawl (1735
). If you want to disable this behavior, updateROBOTSTXT_OBEY
insettings.py
file after creating a new project. - Exporters now work on unicode, instead of bytes by default (
1080
). If you use~scrapy.exporters.PythonItemExporter
, you may want to update your code to disable binary mode which is now deprecated. - Accept XML node names containing dots as valid (
1533
). - When uploading files or images to S3 (with
FilesPipeline
orImagesPipeline
), the default ACL policy is now "private" instead of "public". Warning: backward incompatible! You can use FILES_STORE_S3_ACL
to change it. - We've reimplemented
canonicalize_url()
for more correct output, especially for URLs with non-ASCII characters (1947
). This could change the output of link extractors compared to previous Scrapy versions. It may also invalidate cache entries you could still have from pre-1.1 runs. Warning: backward incompatible!
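A minimal sketch of opting out of the robots.txt compliance described in the list above:

# settings.py -- disable robots.txt compliance for this project
ROBOTSTXT_OBEY = False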
Keep reading for more details on other improvements and bug fixes.
We have been hard at work to make Scrapy run on Python 3. As a result, now you can run spiders on Python 3.3, 3.4 and 3.5 (Twisted >= 15.5 required). Some features are still missing (and some may never be ported).
Almost all builtin extensions/middlewares are expected to work. However, we are aware of some limitations in Python 3:
- Scrapy does not work on Windows with Python 3
- Sending emails is not supported
- FTP download handler is not supported
- Telnet console is not supported
- Scrapy now has a Code of Conduct (
1681
). - Command line tool now has completion for zsh (
934
). - Improvements to
scrapy shell
:- Support for bpython and configure preferred Python shell via
SCRAPY_PYTHON_SHELL
(1100
,1444
). - Support URLs without scheme (
1498
) Warning: backward incompatible! - Bring back support for relative file path (
1710
,1550
).
- Added
MEMUSAGE_CHECK_INTERVAL_SECONDS
setting to change default check interval (1282
). - Download handlers are now lazy-loaded on first request using their scheme (
1390
,1421
). - HTTPS download handlers do not force TLS 1.0 anymore; instead, OpenSSL's
SSLv23_method()/TLS_method()
is used, allowing it to negotiate the highest TLS protocol version the remote host supports (1794
,1629
). RedirectMiddleware
now skips the status codes fromhandle_httpstatus_list
set as a spider attribute or in Request
's meta
key (1334
,1364
,1447
).- Form submission:
- now works with
<button>
elements too (1469
). - an empty string is now used for submit buttons without a value (
1472
)
- Dict-like settings now have per-key priorities (
1135
,1149
and1586
). - Sending non-ASCII emails (
1662
) CloseSpider
andSpiderState
extensions now get disabled if no relevant setting is set (1723
,1725
).- Added method
ExecutionEngine.close
(1423
). - Added method
CrawlerRunner.create_crawler
(1528
). - Scheduler priority queue can now be customized via
SCHEDULER_PRIORITY_QUEUE
(1822
). .pps
links are now ignored by default in link extractors (1835
- Temporary data folder for FTP and S3 feed storages can be customized using the new
FEED_TEMPDIR
setting (1847
). FilesPipeline
andImagesPipeline
settings are now instance attributes instead of class attributes, enabling spider-specific behaviors (1891
).JsonItemExporter
now formats opening and closing square brackets on their own line (first and last lines of output file) (1950
).- If available,
botocore
is used forS3FeedStorage
,S3DownloadHandler
andS3FilesStore
(1761
,1883
). - Tons of documentation updates and related fixes (
1291
,1302
,1335
,1683
,1660
,1642
,1721
,1727
,1879
). - Other refactoring, optimizations and cleanup (
1476
,1481
,1477
,1315
,1290
,1750
,1881
).
- Added
to_bytes
andto_unicode
, deprecatedstr_to_unicode
andunicode_to_str
functions (778
). binary_is_text
is introduced, to replace use ofisbinarytext
(but with inverse return value) (1851
)- The
optional_features
set has been removed (1359
). - The
--lsprof
command line option has been removed (1689
). Warning: backward incompatible, but doesn't break user code. - The following datatypes were deprecated (
1720
):scrapy.utils.datatypes.MultiValueDictKeyError
scrapy.utils.datatypes.MultiValueDict
scrapy.utils.datatypes.SiteNode
- The previously bundled
scrapy.xlib.pydispatch
library was deprecated and replaced by pydispatcher.
telnetconsole
was relocated toextensions/
(1524
).- Note: telnet is not enabled on Python 3 (#1524 (comment))
- Scrapy does not retry requests that got a
HTTP 400 Bad Request
response anymore (1289
). Warning: backward incompatible! - Support empty password for http_proxy config (
1274
). - Interpret
application/x-json
asTextResponse
(1333
). - Support link rel attribute with multiple values (
1201
). - Fixed
scrapy.http.FormRequest.from_response
when there is a<base>
tag (1564
). - Fixed
TEMPLATES_DIR
handling (1575
). - Various
FormRequest
fixes (1595
,1596
,1597
). - Make
_monkeypatches
more robust (1634
). - Fixed bug on
XMLItemExporter
with non-string fields in items (1738
). - Fixed startproject command in macOS (
1635
). - Fixed
~scrapy.exporters.PythonItemExporter
and CSVExporter for non-string item types (1737
). - Various logging related fixes (
1294
,1419
,1263
,1624
,1654
,1722
,1726
and1303
). - Fixed bug in
utils.template.render_templatefile()
(1212
). - sitemaps extraction from
robots.txt
is now case-insensitive (1902
- HTTPS+CONNECT tunnels could get mixed up when using multiple proxies to the same remote host (
1912
).
- Packaging fix: disallow unsupported Twisted versions in setup.py
- FIX: RetryMiddleware is now robust to non-standard HTTP status codes (
1857
) - FIX: Filestorage HTTP cache was checking wrong modified time (
1875
) - DOC: Support for Sphinx 1.4+ (
1893
) - DOC: Consistency in selectors examples (
1869
)
- FIX: [Backport] Ignore bogus links in LinkExtractors (fixes
907
,108195e
) - TST: Changed buildbot makefile to use 'pytest' (
1f3d90a
) - DOC: Fixed typos in tutorial and media-pipeline (
808a9ea
and803bd87
) - DOC: Add AjaxCrawlMiddleware to DOWNLOADER_MIDDLEWARES_BASE in settings docs (
aa94121
)
- Ignoring xlib/tx folder, depending on Twisted version. (
7dfa979
) - Run on new travis-ci infra (
6e42f0b
) - Spelling fixes (
823a1cc
) - escape nodename in xmliter regex (
da3c155
) - test xml nodename with dots (
4418fc3
) - TST don't use broken Pillow version in tests (
a55078c
) - disable log on version command. closes #1426 (
86fc330
) - disable log on startproject command (
db4c9fe
) - Add PyPI download stats badge (
df2b944
) - don't run tests twice on Travis if a PR is made from a scrapy/scrapy branch (
a83ab41
) - Add Python 3 porting status badge to the README (
73ac80d
) - fixed RFPDupeFilter persistence (
97d080e
) - TST a test to show that dupefilter persistence is not working (
97f2fb3
) - explicit close file on file:// scheme handler (
d9b4850
) - Disable dupefilter in shell (
c0d0734
) - DOC: Add captions to toctrees which appear in sidebar (
aa239ad
) - DOC Removed pywin32 from install instructions as it's already declared as dependency. (
10eb400
) - Added installation notes about using Conda for Windows and other OSes. (
1c3600a
) - Fixed minor grammar issues. (
7f4ddd5
) - fixed a typo in the documentation. (
b71f677
) - Version 1 now exists (
5456c0e
) - fix another invalid xpath error (
0a1366e
) - fix ValueError: Invalid XPath: //div/[id="not-exists"]/text() on selectors.rst (
ca8d60f
) - Typos corrections (
7067117
) - fix typos in downloader-middleware.rst and exceptions.rst, middlware -> middleware (
32f115c
) - Add note to Ubuntu install section about Debian compatibility (
23fda69
) - Replace alternative macOS install workaround with virtualenv (
98b63ee
) - Reference Homebrew's homepage for installation instructions (
1925db1
) - Add oldest supported tox version to contributing docs (
5d10d6d
) - Note in install docs about pip being already included in python>=2.7.9 (
85c980e
) - Add non-python dependencies to Ubuntu install section in the docs (
fbd010d
) - Add macOS installation section to docs (
d8f4cba
) - DOC(ENH): specify path to rtd theme explicitly (
de73b1a
) - minor: scrapy.Spider docs grammar (
1ddcc7b
) - Make common practices sample code match the comments (
1b85bcf
) - nextcall repetitive calls (heartbeats). (
55f7104
) - Backport fix compatibility with Twisted 15.4.0 (
b262411
) - pin pytest to 2.7.3 (
a6535c2
) - Merge pull request #1512 from mgedmin/patch-1 (
8876111
) - Merge pull request #1513 from mgedmin/patch-2 (
5d4daf8
) - Typo (
f8d0682
) - Fix list formatting (
5f83a93
) - fix Scrapy squeue tests after recent changes to queuelib (
3365c01
) - Merge pull request #1475 from rweindl/patch-1 (
2d688cd
) - Update tutorial.rst (
fbc1f25
) - Merge pull request #1449 from rhoekman/patch-1 (
7d6538c
) - Small grammatical change (
8752294
) - Add openssl version to version command (
13c45ac
)
- add service_identity to Scrapy install_requires (
cbc2501
) - Workaround for travis#296 (
66af9cd
)
- Twisted 15.3.0 does not raises PicklingError serializing lambda functions (
b04dd7d
) - Minor method name fix (
6f85c7f
) - minor: scrapy.Spider grammar and clarity (
9c9d2e0
) - Put a blurb about support channels in CONTRIBUTING (
c63882b
) - Fixed typos (
a9ae7b0
) - Fix doc reference. (
7c8a4fe
)
- Unquote request path before passing it to FTPClient, which already escapes paths (
cc00ad2
) - include tests/ to source distribution in MANIFEST.in (
eca227e
) - DOC Fix SelectJmes documentation (
b8567bc
) - DOC Bring Ubuntu and Archlinux outside of Windows subsection (
392233f
) - DOC remove version suffix from Ubuntu package (
5303c66
) - DOC Update release date for 1.0 (
c89fa29
)
You will find a lot of new features and bugfixes in this major release. Make sure to check our updated overview <intro-overview>
to get a glance at some of the changes, along with our brushed-up tutorial <intro-tutorial>
.
Declaring and returning Scrapy Items is no longer necessary to collect the scraped data from your spider; you can now return plain dictionaries instead.
Classic version
class MyItem(scrapy.Item):
    url = scrapy.Field()

class MySpider(scrapy.Spider):
    def parse(self, response):
        return MyItem(url=response.url)
New version
class MySpider(scrapy.Spider):
    def parse(self, response):
        return {'url': response.url}
The last Google Summer of Code project accomplished an important redesign of the mechanism used for populating settings, introducing explicit priorities to override any given setting. As an extension of that goal, we included a new priority level for settings that act exclusively for a single spider, allowing them to redefine project settings.
Start using it by defining a ~scrapy.spiders.Spider.custom_settings
class variable in your spider:
class MySpider(scrapy.Spider):
    custom_settings = {
        "DOWNLOAD_DELAY": 5.0,
        "RETRY_ENABLED": False,
    }
Read more about settings population: topics-settings
Scrapy 1.0 has moved away from Twisted logging to Python's built-in logging as the default logging system. We're maintaining backward compatibility for most of the old custom interface for calling logging functions, but you'll get warnings suggesting that you switch to the Python logging API entirely.
Old version
from scrapy import log
log.msg('MESSAGE', log.INFO)
New version
import logging
logging.info('MESSAGE')
Logging with spiders remains the same, but on top of the ~scrapy.spiders.Spider.log
method you’ll have access to a custom ~scrapy.spiders.Spider.logger
created for the spider to issue log events:
class MySpider(scrapy.Spider):
    def parse(self, response):
        self.logger.info('Response received')
Read more in the logging documentation: topics-logging
Another milestone of the last Google Summer of Code was a refactoring of the internal API, seeking simpler and easier usage. Check the new core interface in: topics-api
A common situation where you will face these changes is while running Scrapy from scripts. Here’s a quick example of how to run a Spider manually with the new API:
from scrapy.crawler import CrawlerProcess

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})
process.crawl(MySpider)
process.start()
Bear in mind this feature is still under development and its API may change until it reaches a stable status.
See more examples for scripts running Scrapy: topics-practices
There’s been a large rearrangement of modules trying to improve the general structure of Scrapy. Main changes were separating various subpackages into new projects and dissolving both scrapy.contrib
and scrapy.contrib_exp
into top-level packages. Backward compatibility was kept for these internal relocations; importing from deprecated modules emits warnings pointing to their new location.
Outsourced packages
Note
These extensions went through some minor changes, e.g. some setting names were changed. Please check the documentation in each new repository to get familiar with the new usage.
Old location | New location |
---|---|
scrapy.commands.deploy | scrapyd-client <https://github.com/scrapy/scrapyd-client> (See other alternatives here: topics-deploy) |
scrapy.contrib.djangoitem | scrapy-djangoitem <https://github.com/scrapy-plugins/scrapy-djangoitem> |
scrapy.webservice | scrapy-jsonrpc <https://github.com/scrapy-plugins/scrapy-jsonrpc> |
scrapy.contrib_exp
and scrapy.contrib
dissolutions
Old location | New location |
---|---|
scrapy.contrib_exp.downloadermiddleware.decompression | scrapy.downloadermiddlewares.decompression |
scrapy.contrib_exp.iterators | scrapy.utils.iterators |
scrapy.contrib.downloadermiddleware | scrapy.downloadermiddlewares |
scrapy.contrib.exporter | scrapy.exporters |
scrapy.contrib.linkextractors | scrapy.linkextractors |
scrapy.contrib.loader | scrapy.loader |
scrapy.contrib.loader.processor | scrapy.loader.processors |
scrapy.contrib.pipeline | scrapy.pipelines |
scrapy.contrib.spidermiddleware | scrapy.spidermiddlewares |
scrapy.contrib.spiders | scrapy.spiders |
scrapy.contrib.* (various extension modules) | scrapy.extensions.* |
Plural renames and Modules unification
Old location | New location |
---|---|
scrapy.command | scrapy.commands |
scrapy.dupefilter | scrapy.dupefilters |
scrapy.linkextractor | scrapy.linkextractors |
scrapy.spider | scrapy.spiders |
scrapy.squeue | scrapy.squeues |
scrapy.statscol | scrapy.statscollectors |
scrapy.utils.decorator | scrapy.utils.decorators |
Class renames
Old location | New location |
---|---|
scrapy.spidermanager.SpiderManager | scrapy.spiderloader.SpiderLoader |
Settings renames
Old location | New location |
---|---|
SPIDER_MANAGER_CLASS | SPIDER_LOADER_CLASS |
New Features and Enhancements
- Python logging (
1060
,1235
,1236
,1240
,1259
,1278
,1286
) - FEED_EXPORT_FIELDS option (
1159
,1224
) - Dns cache size and timeout options (
1132
) - support namespace prefix in xmliter_lxml (
963
) - Reactor threadpool max size setting (
1123
) - Allow spiders to return dicts. (
1081
) - Add Response.urljoin() helper (
1086
) - look in ~/.config/scrapy.cfg for user config (
1098
) - handle TLS SNI (
1101
) - Selectorlist extract first (
624
,1145
) - Added JmesSelect (
1016
) - add gzip compression to filesystem http cache backend (
1020
) - CSS support in link extractors (
983
) - httpcache dont_cache meta #19 #689 (
821
) - add signal to be sent when request is dropped by the scheduler (
961
- Avoid downloading large responses (
946
) - Allow to specify the quotechar in CSVFeedSpider (
882
) - Add referer to "Spider error processing" log message (
795
) - process robots.txt once (
896
) - GSoC Per-spider settings (
854
) - Add project name validation (
817
) - GSoC API cleanup (
816
,1128
,1147
,1148
,1156
,1185
,1187
,1258
,1268
,1276
,1285
,1284
) - Be more responsive with IO operations (
1074
and1075
) - Do leveldb compaction for httpcache on closing (
1297
)
Deprecations and Removals
- Deprecate htmlparser link extractor (
1205
) - remove deprecated code from FeedExporter (
1155
- Remove a leftover kept for 0.15 compatibility (
925
) - drop support for CONCURRENT_REQUESTS_PER_SPIDER (
895
) - Drop old engine code (
911
) - Deprecate SgmlLinkExtractor (
777
)
Relocations
- Move exporters/__init__.py to exporters.py (
1242
) - Move base classes to their packages (
1218
,1233
) - Module relocation (
1181
,1210
) - rename SpiderManager to SpiderLoader (
1166
) - Remove djangoitem (
1177
) - remove scrapy deploy command (
1102
) - dissolve contrib_exp (
1134
) - Deleted bin folder from root, fixes #913 (
914
) - Remove jsonrpc based webservice (
859
) - Move Test cases under project root dir (
827
,841
) - Fix backward incompatibility for relocated paths in settings (
1267
)
Documentation
- CrawlerProcess documentation (
1190
) - Favoring web scraping over screen scraping in the descriptions (
1188
) - Some improvements for Scrapy tutorial (
1180
) - Documenting Files Pipeline together with Images Pipeline (
1150
) - deployment docs tweaks (
1164
) - Added deployment section covering scrapyd-deploy and shub (
1124
) - Adding more settings to project template (
1073
) - some improvements to overview page (
1106
) - Updated link in docs/topics/architecture.rst (
647
) - DOC reorder topics (
1022
) - updating list of Request.meta special keys (
1071
) - DOC document download_timeout (
898
) - DOC simplify extension docs (
893
) - Leaks docs (
894
) - DOC document from_crawler method for item pipelines (
904
- Document that spider_error doesn't support deferreds (
1292
) - Corrections & Sphinx related fixes (
1220
,1219
,1196
,1172
,1171
,1169
,1160
,1154
,1127
,1112
,1105
,1041
,1082
,1033
,944
,866
,864
,796
,1260
,1271
,1293
,1298
)
Bugfixes
- Item multi inheritance fix (
353
,1228
) - ItemLoader.load_item: iterate over copy of fields (
722
) - Fix Unhandled error in Deferred (RobotsTxtMiddleware) (
1131
,1197
) - Force to read DOWNLOAD_TIMEOUT as int (
954
) - scrapy.utils.misc.load_object should print full traceback (
902
) - Fix bug for ".local" host name (
878
) - Fix for Enabled extensions, middlewares, pipelines info not printed anymore (
879
) - fix dont_merge_cookies bad behaviour when set to false on meta (
846
)
Python 3 In Progress Support
- disable scrapy.telnet if twisted.conch is not available (
1161
) - fix Python 3 syntax errors in ajaxcrawl.py (
1162
) - more python3 compatibility changes for urllib (
1121
) - assertItemsEqual was renamed to assertCountEqual in Python 3. (
1070
) - Import unittest.mock if available. (
1066
) - updated deprecated cgi.parse_qsl to use six's parse_qsl (
909
) - Prevent Python 3 port regressions (
830
) - PY3: use MutableMapping for python 3 (
810
) - PY3: use six.BytesIO and six.moves.cStringIO (
803
) - PY3: fix xmlrpclib and email imports (
801
) - PY3: use six for robotparser and urlparse (
800
) - PY3: use six.iterkeys, six.iteritems, and tempfile (
799
) - PY3: fix has_key and use six.moves.configparser (
798
) - PY3: use six.moves.cPickle (
797
) - PY3 make it possible to run some tests in Python3 (
776
)
Tests
- remove unnecessary lines from py3-ignores (
1243
) - Fix remaining warnings from pytest while collecting tests (
1206
) - Add docs build to travis (
1234
) - TST don't collect tests from deprecated modules. (
1165
) - install service_identity package in tests to prevent warnings (
1168
) - Fix deprecated settings API in tests (
1152
) - Add test for webclient with POST method and no body given (
1089
) - py3-ignores.txt supports comments (
1044
) - modernize some of the asserts (
835
) - selector.__repr__ test (
779
)
Code refactoring
- CSVFeedSpider cleanup: use iterate_spider_output (
1079
) - remove unnecessary check from scrapy.utils.spider.iter_spider_output (
1078
) - Pydispatch pep8 (
992
) - Removed unused 'load=False' parameter from walk_modules() (
871
) - For consistency, use
job_dir
helper inSpiderState
extension. (805
) - rename "sflo" local variables to less cryptic "log_observer" (
775
)
- encode invalid xpath with unicode_escape under PY2 (
07cb3e5
) - fix IPython shell scope issue and load IPython user config (
2c8e573
) - Fix small typo in the docs (
d694019
) - Fix small typo (
f92fa83
) - Converted sel.xpath() calls to response.xpath() in Extracting the data (
c2c6d15
)
- Support new _getEndpoint Agent signatures on Twisted 15.0.0 (
540b9bc
) - DOC a couple more references are fixed (
b4c454b
) - DOC fix a reference (
e3c1260
) - t.i.b.ThreadedResolver is now a new-style class (
9e13f42
) - S3DownloadHandler: fix auth for requests with quoted paths/query params (
cdb9a0b
) - fixed the variable types in mailsender documentation (
bb3a848
) - Reset items_scraped instead of item_count (
edb07a4
) - Tentative attention message about what document to read for contributions (
7ee6f7a
) - mitmproxy 0.10.1 needs netlib 0.10.1 too (
874fcdd
) - pin mitmproxy 0.10.1 as >0.11 does not work with tests (
c6b21f0
) - Test the parse command locally instead of against an external url (
c3a6628
) - Patches Twisted issue while closing the connection pool on HTTPDownloadHandler (
d0bf957
) - Updates documentation on dynamic item classes. (
eeb589a
) - Merge pull request #943 from Lazar-T/patch-3 (
5fdab02
) - typo (
b0ae199
) - pywin32 is required by Twisted. closes #937 (
5cb0cfb
) - Update install.rst (
781286b
) - Merge pull request #928 from Lazar-T/patch-1 (
b415d04
) - comma instead of fullstop (
627b9ba
) - Merge pull request #885 from jsma/patch-1 (
de909ad
) - Update request-response.rst (
3f3263d
) - SgmlLinkExtractor - fix for parsing <area> tag with Unicode present (
49b40f0
)
- pem file is used by mockserver and required by scrapy bench (
5eddc68
) - scrapy bench needs scrapy.tests* (
d6cb999
)
- no need to waste travis-ci time on py3 for 0.24 (
8e080c1
) - Update installation docs (
1d0c096
) - There is a trove classifier for Scrapy framework! (
4c701d7
) - update other places where w3lib version is mentioned (
d109c13
) - Update w3lib requirement to 1.8.0 (
39d2ce5
) - Use w3lib.html.replace_entities() (remove_entities() is deprecated) (
180d3ad
) - set zip_safe=False (
a51ee8b
) - do not ship tests package (
ee3b371
) - scrapy.bat is not needed anymore (
c3861cf
) - Modernize setup.py (
362e322
) - headers can not handle non-string values (
94a5c65
) - fix ftp test cases (
a274a7f
) - The sum up of travis-ci builds are taking like 50min to complete (
ae1e2cc
) - Update shell.rst typo (
e49c96a
) - removes weird indentation in the shell results (
1ca489d
) - improved explanations, clarified blog post as source, added link for XPath string functions in the spec (
65c8f05
) - renamed UserTimeoutError and ServerTimeouterror #583 (
037f6ab
) - adding some xpath tips to selectors docs (
2d103e0
) - fix tests to account for scrapy/w3lib#23 (
f8d366a
) - get_func_args maximum recursion fix #728 (
81344ea
) - Updated input/output processor example according to #560. (
f7c4ea8
) - Fixed Python syntax in tutorial. (
db59ed9
) - Add test case for tunneling proxy (
f090260
) - Bugfix for leaking Proxy-Authorization header to remote host when using tunneling (
d8793af
) - Extract links from XHTML documents with MIME-Type "application/xml" (
ed1f376
) - Merge pull request #793 from roysc/patch-1 (
91a1106
) - Fix typo in commands.rst (
743e1e2
) - better testcase for settings.overrides.setdefault (
e22daaf
) - Using CRLF as line marker according to http 1.1 definition (
5ec430b
)
- Use a mutable mapping to proxy deprecated settings.overrides and settings.defaults attribute (
e5e8133
- there is no support for Python 3 yet (
3cd6146
) - Update python compatible version set to Debian packages (
fa5d76b
) - DOC fix formatting in release notes (
c6a9e20
)
- Fix deprecated CrawlerSettings and increase backward compatibility with .defaults attribute (
8e3f20a
)
- Improve Scrapy top-level namespace (
494
,684
) - Add selector shortcuts to responses (
554
,690
) - Add new lxml based LinkExtractor to replace unmaintained SgmlLinkExtractor (
559
,761
,763
) - Cleanup settings API - part of per-spider settings GSoC project (
737
) - Add UTF8 encoding header to templates (
688
,762
) - Telnet console now binds to 127.0.0.1 by default (
699
) - Update Debian/Ubuntu install instructions (
509
,549
) - Disable smart strings in lxml XPath evaluations (
535
) - Restore filesystem based cache as default for http cache middleware (
541
,500
,571
) - Expose current crawler in Scrapy shell (
557
) - Improve testsuite comparing CSV and XML exporters (
570
) - New
offsite/filtered
andoffsite/domains
stats (566
) - Support process_links as generator in CrawlSpider (
555
) - Verbose logging and new stats counters for DupeFilter (
553
) - Add a mimetype parameter to
MailSender.send()
(602
) - Generalize file pipeline log messages (
622
) - Replace unencodeable codepoints with html entities in SGMLLinkExtractor (
565
) - Converted SEP documents to rst format (
629
,630
,638
,632
,636
,640
,635
,634
,639
,637
,631
,633
,641
,642
) - Tests and docs for clickdata's nr index in FormRequest (
646
,645
) - Allow to disable a downloader handler just like any other component (
650
) - Log when a request is discarded after too many redirections (
654
) - Log error responses if they are not handled by spider callbacks (
612
,656
) - Add content-type check to http compression mw (
193
,660
) - Run pypy tests using latest pypi from ppa (
674
) - Run test suite using pytest instead of trial (
679
) - Build docs and check for dead links in tox environment (
687
) - Make scrapy.version_info a tuple of integers (
681
,692
) - Infer exporter's output format from filename extensions (
546
,659
,760
) - Support case-insensitive domains in
url_is_from_any_domain()
(693
) - Remove pep8 warnings in project and spider templates (
698
) - Tests and docs for
request_fingerprint
function (597
) - Update SEP-19 for GSoC project
per-spider settings
(705
) - Set exit code to non-zero when contracts fails (
727
) - Add a setting to control what class is instantiated as Downloader component (
738
) - Pass response in
item_dropped
signal (724
) - Improve
scrapy check
contracts command (733
,752
) - Document
spider.closed()
shortcut (719
) - Document
request_scheduled
signal (746
) - Add a note about reporting security issues (
697
) - Add LevelDB http cache storage backend (
626
,500
) - Sort spider list output of
scrapy list
command (742
) - Multiple documentation enhancements and fixes (
575
,587
,590
,596
,610
,617
,618
,627
,613
,643
,654
,675
,663
,711
,714
)
- Encode unicode URL value when creating Links in RegexLinkExtractor (
561
) - Ignore None values in ItemLoader processors (
556
) - Fix link text when there is an inner tag in SGMLLinkExtractor and HtmlParserLinkExtractor (
485
,574
) - Fix wrong checks on subclassing of deprecated classes (
581
,584
) - Handle errors caused by inspect.stack() failures (
582
- Fix a reference to a nonexistent engine attribute (
593
,594
) - Fix dynamic itemclass example usage of type() (
603
) - Use lucasdemarchi/codespell to fix typos (
628
) - Fix default value of attrs argument in SgmlLinkExtractor to be tuple (
661
) - Fix XXE flaw in sitemap reader (
676
) - Fix engine to support filtered start requests (
707
) - Fix offsite middleware case on urls with no hostnames (
745
) - Testsuite doesn't require PIL anymore (
585
)
- fix a reference to nonexistent engine.slots. closes #593 (
13c099a
) - downloaderMW doc typo (spiderMW doc copy remnant) (
8ae11bf
) - Correct typos (
1346037
)
- localhost666 can resolve under certain circumstances (
2ec2279
) - test inspect.stack failure (
cc3eda3
) - Handle cases when inspect.stack() fails (
8cb44f9
) - Fix wrong checks on subclassing of deprecated classes. closes #581 (
46d98d6
) - Docs: 4-space indent for final spider example (
13846de
) - Fix HtmlParserLinkExtractor and tests after #485 merge (
368a946
) - BaseSgmlLinkExtractor: Fixed the missing space when the link has an inner tag (
b566388
) - BaseSgmlLinkExtractor: Added unit test of a link with an inner tag (
c1cb418
) - BaseSgmlLinkExtractor: Fixed unknown_endtag() so that it only set current_link=None when the end tag match the opening tag (
7e4d627
) - Fix tests for Travis-CI build (
76c7e20
) - replace unencodeable codepoints with html entities. fixes #562 and #285 (
5f87b17
) - RegexLinkExtractor: encode URL unicode value when creating Links (
d0ee545
) - Updated the tutorial crawl output with latest output. (
8da65de
) - Updated shell docs with the crawler reference and fixed the actual shell output. (
875b9ab
) - PEP8 minor edits. (
f89efaf
) - Expose current crawler in the Scrapy shell. (
5349cec
) - Unused re import and PEP8 minor edits. (
387f414
) - Ignore None's values when using the ItemLoader. (
0632546
) - DOC Fixed HTTPCACHE_STORAGE typo in the default value which is now Filesystem instead Dbm. (
cde9a8c
) - show Ubuntu setup instructions as literal code (
fb5c9c5
) - Update Ubuntu installation instructions (
70fb105
) - Merge pull request #550 from stray-leone/patch-1 (
6f70b6a
) - modify the version of Scrapy Ubuntu package (
725900d
) - fix 0.22.0 release date (
af0219a
) - fix typos in news.rst and remove (not released yet) header (
b7f58f4
)
- [Backward incompatible] Switched HTTPCacheMiddleware backend to filesystem (
541
) To restore old backend setHTTPCACHE_STORAGE
toscrapy.contrib.httpcache.DbmCacheStorage
- Proxy https:// urls using CONNECT method (
392
,397
) - Add a middleware to crawl ajax crawlable pages as defined by google (
343
) - Rename scrapy.spider.BaseSpider to scrapy.spider.Spider (
510
,519
) - Selectors register EXSLT namespaces by default (
472
) - Unify item loaders similar to selectors renaming (
461
) - Make
RFPDupeFilter
class easily subclassable (533
) - Improve test coverage and forthcoming Python 3 support (
525
) - Promote startup info on settings and middleware to INFO level (
520
) - Support partials in
get_func_args
util (506
, issue:504) - Allow running individual tests via tox (
503
) - Update extensions ignored by link extractors (
498
) - Add middleware methods to get files/images/thumbs paths (
490
) - Improve offsite middleware tests (
478
) - Add a way to skip default Referer header set by RefererMiddleware (
475
) - Do not send
x-gzip
in defaultAccept-Encoding
header (469
) - Support defining http error handling using settings (
466
) - Use modern python idioms wherever you find legacies (
497
) - Improve and correct documentation (
527
,524
,521
,517
,512
,505
,502
,489
,465
,460
,425
,536
)
- Update Selector class imports in CrawlSpider template (
484
- Fix nonexistent reference to
engine.slots
(464
) - Do not try to call
body_as_unicode()
on a non-TextResponse instance (462
) - Warn when subclassing XPathItemLoader, previously it only warned on instantiation. (
523
) - Warn when subclassing XPathSelector, previously it only warned on instantiation. (
537
) - Multiple fixes to memory stats (
531
,530
,529
) - Fix overriding url in
FormRequest.from_response()
(507
) - Fix tests runner under pip 1.5 (
513
) - Fix logging error when spider name is unicode (
479
)
- Update CrawlSpider Template with Selector changes (
6d1457d
) - fix method name in tutorial. closes GH-480 (
b4fc359
) - include_package_data is required to build wheels from published sources (
5ba1ad5
) - process_parallel was leaking the failures on its internal deferreds. closes #458 (
419a780
)
- New Selector's API including CSS selectors (
395
and426
), - Request/Response url/body attributes are now immutable (modifying them had been deprecated for a long time)
ITEM_PIPELINES
is now defined as a dict (instead of a list)- Sitemap spider can fetch alternate URLs (
360
) Selector.remove_namespaces()
now remove namespaces from element's attributes. (416
)- Paved the road for Python 3.3+ (
435
,436
,431
,452
) - New item exporter using native python types with nesting support (
366
) - Tune HTTP1.1 pool size so it matches concurrency defined by settings (
b43b5f575
) - scrapy.mail.MailSender now can connect over TLS or upgrade using STARTTLS (
327
) - New FilesPipeline with functionality factored out from ImagesPipeline (
370
,409
) - Recommend Pillow instead of PIL for image handling (
317
) - Added Debian packages for Ubuntu Quantal and Raring (
86230c0
) - Mock server (used for tests) can listen for HTTPS requests (
410
) - Remove multi spider support from multiple core components (
422
,421
,420
,419
,423
,418
) - Travis-CI now tests Scrapy changes against development versions of
w3lib
andqueuelib
python packages. - Add pypy 2.1 to continuous integration tests (
ecfa7431
) - Pylinted, pep8 and removed old-style exceptions from source (
430
,432
) - Use importlib for parametric imports (
445
) - Handle a regression introduced in Python 2.7.5 that affects XmlItemExporter (
372
) - Bugfix crawling shutdown on SIGINT (
450
) - Do not submit
reset
type inputs in FormRequest.from_response (b326b87
) - Do not silence download errors when request errback raises an exception (
684cfc0
)
- Fix tests under Django 1.6 (
b6bed44c
- Lots of bugfixes to the retry middleware for disconnections when using the HTTP 1.1 download handler
- Fix inconsistencies among Twisted releases (
406
) - Fix Scrapy shell bugs (
418
,407
) - Fix invalid variable name in setup.py (
429
) - Fix tutorial references (
387
) - Improve request-response docs (
391
) - Improve best practices docs (
399
,400
,401
,402
) - Improve django integration docs (
404
) - Document
bindaddress
request meta (37c24e01d7
) - Improve
Request
class documentation (226
)
- Dropped Python 2.6 support (
448
) - Add
cssselect <cssselect:index>
python package as install dependency - Drop libxml2 and multi selector's backend support, lxml is required from now on.
- Minimum Twisted version increased to 10.0.0, dropped Twisted 8.0 support.
- Running test suite now requires
mock
python library (390
)
Thanks to everyone who contributed to this release!
List of contributors sorted by number of commits:
69 Daniel Graña <dangra@...>
37 Pablo Hoffman <pablo@...>
13 Mikhail Korobov <kmike84@...>
9 Alex Cepoi <alex.cepoi@...>
9 alexanderlukanin13 <alexander.lukanin.13@...>
8 Rolando Espinoza La fuente <darkrho@...>
8 Lukasz Biedrycki <lukasz.biedrycki@...>
6 Nicolas Ramirez <nramirez.uy@...>
3 Paul Tremberth <paul.tremberth@...>
2 Martin Olveyra <molveyra@...>
2 Stefan <misc@...>
2 Rolando Espinoza <darkrho@...>
2 Loren Davie <loren@...>
2 irgmedeiros <irgmedeiros@...>
1 Stefan Koch <taikano@...>
1 Stefan <cct@...>
1 scraperdragon <dragon@...>
1 Kumara Tharmalingam <ktharmal@...>
1 Francesco Piccinno <stack.box@...>
1 Marcos Campal <duendex@...>
1 Dragon Dave <dragon@...>
1 Capi Etheriel <barraponto@...>
1 cacovsky <amarquesferraz@...>
1 Berend Iwema <berend@...>
- IPython refuses to update the namespace. fix #396 (
3d32c4f
) - Fix AlreadyCalledError replacing a request in shell command. closes #407 (
b1d8919
) - Fix start_requests laziness and early hangs (
89faf52
)
- fix regression on lazy evaluation of start requests (
12693a5
) - forms: do not submit reset inputs (
e429f63
) - increase unittest timeouts to decrease travis false positive failures (
912202e
) - backport master fixes to json exporter (
cfc2d46
) - Fix permission and set umask before generating sdist tarball (
06149e0
)
- Backport
scrapy check
command fixes and backward compatible multi crawler process(339
)
- remove extra import added by cherry picked changes (
d20304e
) - fix crawling tests under twisted pre 11.0.0 (
1994f38
) - py26 can not format zero length fields {} (
abf756f
) - test PotentiaDataLoss errors on unbound responses (
b15470d
) - Treat responses without content-length or Transfer-Encoding as good responses (
c4bf324
- do not include ResponseFailed if http11 handler is not enabled (
6cbe684
) - New HTTP client wraps connection lost in ResponseFailed exception. fix #373 (
1a20bba
) - limit travis-ci build matrix (
3b01bb8
) - Merge pull request #375 from peterarenot/patch-1 (
fa766d7
) - Fixed so it refers to the correct folder (
3283809
) - added Quantal & Raring to support Ubuntu releases (
1411923
) - fix retry middleware which didn't retry certain connection errors after the upgrade to http1 client, closes GH-373 (
bb35ed0
) - fix XmlItemExporter in Python 2.7.4 and 2.7.5 (
de3e451
) - minor updates to 0.18 release notes (
c45e5f1
) - fix contributors list format (
0b60031
)
- Lots of improvements to the testsuite run using Tox, including a way to test on PyPy
- Handle GET parameters for AJAX crawlable urls (
3fe2a32
) - Use lxml recover option to parse sitemaps (
347
) - Bugfix cookie merging by hostname and not by netloc (
352
) - Support disabling
HttpCompressionMiddleware
using a flag setting (359
) - Support xml namespaces using
iternodes
parser inXMLFeedSpider
(12
) - Support
dont_cache
request meta flag (19
) - Bugfix
scrapy.utils.gz.gunzip
broken by changes in python 2.7.4 (4dc76e
) - Bugfix url encoding on
SgmlLinkExtractor
(24
) - Bugfix
TakeFirst
processor shouldn't discard zero (0) value (59
) - Support nested items in xml exporter (
66
) - Improve cookies handling performance (
77
) - Log dupe filtered requests once (
105
) - Split redirection middleware into status and meta based middlewares (
78
) - Use HTTP1.1 as default downloader handler (
109
and318
) - Support xpath form selection on
FormRequest.from_response
(185
) - Bugfix unicode decoding error on
SgmlLinkExtractor
(199
- Bugfix signal dispatching on the PyPy interpreter (
205
) - Improve request delay and concurrency handling (
206
) - Add RFC2616 cache policy to
HttpCacheMiddleware
(212
) - Allow customization of messages logged by engine (
214
- Multiple improvements to
DjangoItem
(217
,218
,221
) - Extend Scrapy commands using setuptools entry points (
260
) - Allow spider
allowed_domains
value to be a set/tuple (261
) - Support
settings.getdict
(269
) - Simplify internal
scrapy.core.scraper
slot handling (271
) - Added
Item.copy
(290
) - Collect idle downloader slots (
297
) - Add
ftp://
scheme downloader handler (329
) - Added downloader benchmark webserver and spider tools
benchmarking
- Moved persistent (on disk) queues to a separate project (queuelib) which Scrapy now depends on
- Add Scrapy commands using external libraries (
260
) - Added
--pdb
option toscrapy
command line tool - Added
XPathSelector.remove_namespaces <scrapy.selector.Selector.remove_namespaces>
which allows to remove all namespaces from XML documents for convenience (to work with namespace-less XPaths). Documented intopics-selectors
. - Several improvements to spider contracts
- New default middleware named MetaRefreshMiddleware that handles meta-refresh html tag redirections,
- MetaRefreshMiddleware and RedirectMiddleware have different priorities to address #62
- added from_crawler method to spiders
- added system tests with mock server
- more improvements to macOS compatibility (thanks Alex Cepoi)
- several more cleanups to singletons and multi-spider support (thanks Nicolas Ramirez)
- support custom download slots
- added --spider option to "shell" command.
- log overridden settings when Scrapy starts
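
The `dont_cache` request meta flag mentioned above is easiest to see in context. A minimal sketch using the modern `scrapy.Spider` API for brevity (spider name and URLs are placeholders):

```python
import scrapy

class CacheAwareSpider(scrapy.Spider):
    # Hypothetical spider illustrating the dont_cache meta flag.
    name = "cache_aware"
    start_urls = ["http://example.com/listing"]

    def parse(self, response):
        # A response fetched with dont_cache=True is not stored by
        # HttpCacheMiddleware, even when HTTPCACHE_ENABLED is True.
        yield scrapy.Request(
            "http://example.com/volatile-page",
            meta={"dont_cache": True},
            callback=self.parse_volatile,
        )

    def parse_volatile(self, response):
        self.logger.info("Fetched %s without caching it", response.url)
```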
Thanks to everyone who contributed to this release. Here is a list of contributors sorted by number of commits:
130 Pablo Hoffman <pablo@...>
97 Daniel Graña <dangra@...>
20 Nicolás Ramírez <nramirez.uy@...>
13 Mikhail Korobov <kmike84@...>
12 Pedro Faustino <pedrobandim@...>
11 Steven Almeroth <sroth77@...>
5 Rolando Espinoza La fuente <darkrho@...>
4 Michal Danilak <mimino.coder@...>
4 Alex Cepoi <alex.cepoi@...>
4 Alexandr N Zamaraev (aka tonal) <tonal@...>
3 paul <paul.tremberth@...>
3 Martin Olveyra <molveyra@...>
3 Jordi Llonch <llonchj@...>
3 arijitchakraborty <myself.arijit@...>
2 Shane Evans <shane.evans@...>
2 joehillen <joehillen@...>
2 Hart <HartSimha@...>
2 Dan <ellisd23@...>
1 Zuhao Wan <wanzuhao@...>
1 whodatninja <blake@...>
1 vkrest <v.krestiannykov@...>
1 tpeng <pengtaoo@...>
1 Tom Mortimer-Jones <tom@...>
1 Rocio Aramberri <roschegel@...>
1 Pedro <pedro@...>
1 notsobad <wangxiaohugg@...>
1 Natan L <kuyanatan.nlao@...>
1 Mark Grey <mark.grey@...>
1 Luan <luanpab@...>
1 Libor Nenadál <libor.nenadal@...>
1 Juan M Uys <opyate@...>
1 Jonas Brunsgaard <jonas.brunsgaard@...>
1 Ilya Baryshev <baryshev@...>
1 Hasnain Lakhani <m.hasnain.lakhani@...>
1 Emanuel Schorsch <emschorsch@...>
1 Chris Tilden <chris.tilden@...>
1 Capi Etheriel <barraponto@...>
1 cacovsky <amarquesferraz@...>
1 Berend Iwema <berend@...>
- obey request method when Scrapy deploy is redirected to a new endpoint (8c4fcee)
- fix inaccurate downloader middleware documentation. refs #280 (40667cb)
- doc: remove links to diveintopython.org, which is no longer available. closes #246 (bd58bfa)
- Find form nodes in invalid html5 documents (e3d6945)
- Fix typo labeling attrs type bool instead of list (a274276)
- fixes spelling errors in documentation (6d2b3aa)
- add doc about disabling an extension. refs #132 (c90de33)
- Fixed error message formatting. log.err() doesn't support cool formatting and when error occurred, the message was: "ERROR: Error processing %(item)s" (c16150c)
- lint and improve images pipeline error logging (56b45fc)
- fixed doc typos (243be84)
- add documentation topics: Broad Crawls & Common Practices (1fbb715)
- fix bug in Scrapy parse command when spider is not specified explicitly. closes #209 (c72e682)
- Update docs/topics/commands.rst (28eac7a)
- Remove concurrency limitation when using download delays and still ensure inter-request delays are enforced (487b9b5)
- add error details when image pipeline fails (8232569)
- improve macOS compatibility (8dcf8aa)
- setup.py: use README.rst to populate long_description (7b5310d)
- doc: removed obsolete references to ClientForm (80f9bb6)
- correct docs for default storage backend (2aa491b)
- doc: removed broken proxyhub link from FAQ (bdf61c4)
- Fixed docs typo in SpiderOpenCloseLogging example (7184094)
- Scrapy contracts: python2.6 compat (a4a9199)
- Scrapy contracts verbose option (ec41673)
- proper unittest-like output for Scrapy contracts (86635e4)
- added open_in_browser to debugging doc (c9b690d)
- removed reference to global Scrapy stats from settings doc (dd55067)
- Fix SpiderState bug in Windows platforms (58998f4)
- fixed LogStats extension, which got broken after a wrong merge before the 0.16 release (8c780fd)
- better backward compatibility for scrapy.conf.settings (3403089)
- extended documentation on how to access crawler stats from extensions (c4da0b5)
- removed .hgtags (no longer needed now that Scrapy uses git) (d52c188)
- fix dashes under rst headers (fa4f7f9)
- set release date for 0.16.0 in news (e292246)
Scrapy changes:

- added `topics-contracts`, a mechanism for testing spiders in a formal/reproducible way
- added options `-o` and `-t` to the `runspider` command
- documented `topics/autothrottle` and added to extensions installed by default. You still need to enable it with `AUTOTHROTTLE_ENABLED`
- major Stats Collection refactoring: removed separation of global/per-spider stats, removed stats-related signals (`stats_spider_opened`, etc). Stats are much simpler now, backward compatibility is kept on the Stats Collector API and signals.
- added `~scrapy.spidermiddlewares.SpiderMiddleware.process_start_requests` method to spider middlewares
- dropped Signals singleton. Signals should now be accessed through the Crawler.signals attribute. See the signals documentation for more info.
- dropped Stats Collector singleton. Stats can now be accessed through the Crawler.stats attribute. See the stats collection documentation for more info.
- documented `topics-api`
- `lxml` is now the default selectors backend instead of `libxml2`
- ported FormRequest.from_response() to use lxml instead of ClientForm
- removed modules: `scrapy.xlib.BeautifulSoup` and `scrapy.xlib.ClientForm`
- SitemapSpider: added support for sitemap urls ending in .xml and .xml.gz, even if they advertise a wrong content type (10ed28b)
- StackTraceDump extension: also dump trackref live references (fe2ce93)
- nested items now fully supported in JSON and JSONLines exporters
- added `cookiejar` Request meta key to support multiple cookie sessions per spider (see the sketch after this list)
- decoupled encoding detection code to w3lib.encoding, and ported Scrapy code to use that module
- dropped support for Python 2.5. See https://blog.scrapinghub.com/2012/02/27/scrapy-0-15-dropping-support-for-python-2-5/
- dropped support for Twisted 2.5
- added `REFERER_ENABLED` setting, to control referer middleware
- changed default user agent to: `Scrapy/VERSION (+http://scrapy.org)`
- removed (undocumented) `HTMLImageLinkExtractor` class from `scrapy.contrib.linkextractors.image`
- removed per-spider settings (to be replaced by instantiating multiple crawler objects)
  - `USER_AGENT` spider attribute will no longer work, use `user_agent` attribute instead
  - `DOWNLOAD_TIMEOUT` spider attribute will no longer work, use `download_timeout` attribute instead
- removed `ENCODING_ALIASES` setting, as encoding auto-detection has been moved to the w3lib library
- promoted `topics-djangoitem` to main contrib
- LogFormatter method now returns dicts (instead of strings) to support lazy formatting (164, dcef7b0)
- downloader handlers (`DOWNLOAD_HANDLERS` setting) now receive settings as the first argument of the `__init__` method
- replaced memory usage accounting with (more portable) resource module, removed `scrapy.utils.memory` module
- removed signal: `scrapy.mail.mail_sent`
- removed `TRACK_REFS` setting, now `trackrefs <topics-leaks-trackrefs>` is always enabled
- DBM is now the default storage backend for HTTP cache middleware
- number of log messages (per level) are now tracked through Scrapy stats (stat name: `log_count/LEVEL`)
- number of received responses are now tracked through Scrapy stats (stat name: `response_received_count`)
- removed `scrapy.log.started` attribute
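
The `cookiejar` meta key mentioned in the list above enables several independent cookie sessions within one spider. A minimal sketch (spider name and URLs are placeholders); note that the key is not "sticky", so it must be passed along on follow-up requests:

```python
import scrapy

class MultiSessionSpider(scrapy.Spider):
    # Hypothetical spider running three isolated cookie sessions.
    name = "multi_session"

    def start_requests(self):
        for session_id in range(3):
            # Each distinct cookiejar value gets its own cookie store.
            yield scrapy.Request(
                "http://example.com/login",
                meta={"cookiejar": session_id},
                dont_filter=True,
            )

    def parse(self, response):
        # The cookiejar key is not sticky: pass it on explicitly so the
        # follow-up request reuses the same session's cookies.
        yield scrapy.Request(
            "http://example.com/private",
            meta={"cookiejar": response.meta["cookiejar"]},
            callback=self.parse_private,
            dont_filter=True,
        )

    def parse_private(self, response):
        self.logger.info(
            "Session %s fetched %s", response.meta["cookiejar"], response.url
        )
```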
- added precise to supported Ubuntu distros (b7e46df)
- fixed bug in json-rpc webservice reported in https://groups.google.com/forum/#!topic/scrapy-users/qgVBmFybNAQ/discussion. also removed no longer supported 'run' command from extras/scrapy-ws.py (340fbdb)
- meta tag attributes for content-type http equiv can be in any order. #123 (0cb68af)
- replace "import Image" by more standard "from PIL import Image". closes #88 (4d17048)
- return trial status as bin/runtests.sh exit value. #118 (b7b2e7f)
- forgot to include pydispatch license. #118 (fd85f9c)
- include egg files used by testsuite in source distribution. #118 (c897793)
- update docstring in project template to avoid confusion with genspider command, which may be considered as an advanced feature. refs #107 (2548dcc)
- added note to docs/topics/firebug.rst about google directory being shut down (668e352)
- don't discard slot when empty, just save in another dict in order to recycle if needed again. (8e9f607)
- do not fail handling unicode xpaths in libxml2 backed selectors (b830e95)
- fixed minor mistake in Request objects documentation (bf3c9ee)
- fixed minor defect in link extractors documentation (ba14f38)
- removed some obsolete remaining code related to sqlite support in Scrapy (0665175)
- move buffer pointing to start of file before computing checksum. refs #92 (6a5bef2)
- Compute image checksum before persisting images. closes #92 (9817df1)
- remove leaking references in cached failures (673a120)
- fixed bug in MemoryUsage extension: get_engine_status() takes exactly 1 argument (0 given) (11133e9)
- fixed struct.error on http compression middleware. closes #87 (1423140)
- ajax crawling wasn't expanding for unicode urls (0de3fb4)
- Catch start_requests iterator errors. refs #83 (454a21d)
- Speed-up libxml2 XPathSelector (2fbd662)
- updated versioning doc according to recent changes (0a070f5)
- scrapyd: fixed documentation link (2b4e4c3)
- extras/makedeb.py: no longer obtaining version from git (caffe0e)
- extras/makedeb.py: no longer obtaining version from git (caffe0e)
- bumped version to 0.14.1 (6cb9e1c)
- fixed reference to tutorial directory (4b86bd6)
- doc: removed duplicated callback argument from Request.replace() (1aeccdd)
- fixed formatting of scrapyd doc (8bf19e6)
- Dump stacks for all running threads and fix engine status dumped by StackTraceDump extension (14a8e6e)
- added comment about why we disable ssl on boto images upload (5223575)
- SSL handshaking hangs when doing too many parallel connections to S3 (63d583d)
- change tutorial to follow changes on dmoz site (bcb3198)
- Avoid _disconnectedDeferred AttributeError exception in Twisted>=11.1.0 (98f3f87)
- allow spider to set autothrottle max concurrency (175a4b5)
- Support for AJAX crawlable urls
- New persistent scheduler that stores requests on disk, allowing crawls to be suspended and resumed (2737)
- added `-o` option to `scrapy crawl`, a shortcut for dumping scraped items into a file (or standard output using `-`)
- Added support for passing custom settings to Scrapyd `schedule.json` api (2779, 2783)
- New `ChunkedTransferMiddleware` (enabled by default) to support chunked transfer encoding (2769)
- Add boto 2.0 support for S3 downloader handler (2763)
- Added marshal to formats supported by feed exports (2744)
- In request errbacks, offending requests are now received in `failure.request` attribute (2738)
- Big downloader refactoring to support per domain/ip concurrency limits (2732)
  - `CONCURRENT_REQUESTS_PER_SPIDER` setting has been deprecated and replaced by: `CONCURRENT_REQUESTS`, `CONCURRENT_REQUESTS_PER_DOMAIN`, `CONCURRENT_REQUESTS_PER_IP`
  - check the documentation for more details
- Added builtin caching DNS resolver (2728)
- Moved Amazon AWS-related components/extensions (SQS spider queue, SimpleDB stats collector) to a separate project: [scaws](https://github.com/scrapinghub/scaws) (2706, 2714)
- Moved spider queues to scrapyd: `scrapy.spiderqueue` -> `scrapyd.spiderqueue` (2708)
- Moved sqlite utils to scrapyd: `scrapy.utils.sqlite` -> `scrapyd.sqlite` (2781)
- Real support for returning iterators in the `start_requests()` method. The iterator is now consumed during the crawl, when the spider is getting idle (2704)
- Added `REDIRECT_ENABLED` setting to quickly enable/disable the redirect middleware (2697)
- Added `RETRY_ENABLED` setting to quickly enable/disable the retry middleware (2694)
- Added `CloseSpider` exception to manually close spiders (2691) (see the sketch after these notes)
- Improved encoding detection by adding support for HTML5 meta charset declaration (2690)
- Refactored close spider behavior to wait for all downloads to finish and be processed by spiders, before closing the spider (2688)
- Added `SitemapSpider` (see documentation in Spiders page) (2658)
- Added `LogStats` extension for periodically logging basic stats (like crawled pages and scraped items) (2657)
- Make handling of gzipped responses more robust (#319, 2643). Now Scrapy will try to decompress as much as possible from a gzipped response, instead of failing with an `IOError`.
- Simplified MemoryDebugger extension to use stats for dumping memory debugging info (2639)
- Added new command to edit spiders: `scrapy edit` (2636) and `-e` flag to `genspider` command that uses it (2653)
- Changed default representation of items to pretty-printed dicts (2631). This improves default logging by making the log more readable in the default case, for both Scraped and Dropped lines.
- Added `spider_error` signal (2628)
- Added `COOKIES_ENABLED` setting (2625)
- Stats are now dumped to the Scrapy log (default value of the `STATS_DUMP` setting has been changed to `True`). This is to make Scrapy users more aware of Scrapy stats and the data that is collected there.
- Added support for dynamically adjusting download delay and maximum concurrent requests (2599)
- Added new DBM HTTP cache storage backend (2576)
- Added `listjobs.json` API to Scrapyd (2571)
- `CsvItemExporter`: added `join_multivalued` parameter (2578)
- Added namespace support to `xmliter_lxml` (2552)
- Improved cookies middleware by making `COOKIES_DEBUG` nicer and documenting it (2579)
- Several improvements to Scrapyd and Link extractors
- Merged item passed and item scraped concepts, as they have often proved confusing in the past. This means: (2630)
  - original item_scraped signal was removed
  - original item_passed signal was renamed to item_scraped
  - old log lines `Scraped Item...` were removed
  - old log lines `Passed Item...` were renamed to `Scraped Item...` lines and downgraded to `DEBUG` level
- Removed unused function: `scrapy.utils.request.request_info()` (2577)
- Removed googledir project from `examples/googledir`. There's now a new example project called `dirbot` available on GitHub: https://github.com/scrapy/dirbot
- Removed support for default field values in Scrapy items (2616)
- Removed experimental crawlspider v2 (2632)
- Removed scheduler middleware to simplify architecture. Duplicates filter is now done in the scheduler itself, using the same dupe filtering class as before (`DUPEFILTER_CLASS` setting) (2640)
- Removed support for passing urls to `scrapy crawl` command (use `scrapy parse` instead) (2704)
- Removed deprecated Execution Queue (2704)
- Removed (undocumented) spider context extension (from scrapy.contrib.spidercontext) (2780)
- removed `CONCURRENT_SPIDERS` setting (use scrapyd maxproc instead) (2789)
- Renamed attributes of core components: downloader.sites -> downloader.slots, scraper.sites -> scraper.slots (2717, 2718)
- Renamed setting `CLOSESPIDER_ITEMPASSED` to `CLOSESPIDER_ITEMCOUNT` (2655). Backward compatibility kept.
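
The `CloseSpider` exception noted above lets a callback stop the whole crawl. A minimal sketch (the spider name, URL, and stop condition are made up):

```python
import scrapy
from scrapy.exceptions import CloseSpider

class BoundedSpider(scrapy.Spider):
    # Hypothetical spider that shuts itself down when the site misbehaves.
    name = "bounded"
    start_urls = ["http://example.com/"]

    def parse(self, response):
        if b"maintenance" in response.body:
            # Raising CloseSpider asks the engine to close the spider;
            # the reason string is recorded in the finish_reason stat.
            raise CloseSpider("site_under_maintenance")
        yield {"url": response.url}
```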
The numbers like #NNN reference tickets in the old issue tracker (Trac) which is no longer available.
- Passed item is now sent in the `item` argument of the `item_passed <item_scraped>` signal (#273)
- Added verbose option to `scrapy version` command, useful for bug reports (#298)
- HTTP cache now stored by default in the project data dir (#279)
- Added project data storage directory (#276, #277)
- Documented file structure of Scrapy projects (see command-line tool doc)
- New lxml backend for XPath selectors (#147)
- Per-spider settings (#245)
- Support exit codes to signal errors in Scrapy commands (#248)
- Added `-c` argument to `scrapy shell` command
- Made `libxml2` optional (#260)
- New `deploy` command (#261)
- Added `CLOSESPIDER_PAGECOUNT` setting (#253) (see the settings sketch after this list)
- Added `CLOSESPIDER_ERRORCOUNT` setting (#254)
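
Both CLOSESPIDER_* settings are consumed by the CloseSpider extension; a minimal settings.py sketch (the thresholds are arbitrary examples):

```python
# settings.py sketch: close the spider automatically once either
# threshold is reached (a value of 0 disables the corresponding check).
CLOSESPIDER_PAGECOUNT = 1000  # stop after ~1000 crawled responses
CLOSESPIDER_ERRORCOUNT = 10   # stop after 10 errors
```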
- Scrapyd now uses one process per spider
- It stores one log file per spider run, and rotates them keeping the latest 5 logs per spider (by default)
- A minimal web ui was added, available at http://localhost:6800 by default
- There is now a `scrapy server` command to start a Scrapyd server of the current project
- added `HTTPCACHE_ENABLED` setting (False by default) to enable HTTP cache middleware (see the settings sketch after this list)
- changed `HTTPCACHE_EXPIRATION_SECS` semantics: now zero means "never expire".
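
For the two cache settings above, a minimal settings.py sketch:

```python
# settings.py sketch: turn on HttpCacheMiddleware and keep cached
# responses forever (zero now means "never expire").
HTTPCACHE_ENABLED = True
HTTPCACHE_EXPIRATION_SECS = 0
```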
- Deprecated `runserver` command in favor of `server` command which starts a Scrapyd server. See also: Scrapyd changes
- Deprecated `queue` command in favor of using Scrapyd `schedule.json` API. See also: Scrapyd changes
- Removed the LxmlItemLoader (experimental contrib which never graduated to main contrib)
The numbers like #NNN reference tickets in the old issue tracker (Trac) which is no longer available.
- New Scrapy service called `scrapyd` for deploying Scrapy crawlers in production (#218) (documentation available)
- Scrapy shell now shows the Scrapy log by default (#206)
- Refactored execution queue in a common base code and pluggable backends called "spider queues" (#220)
- New persistent spider queue (based on SQLite) (#198), available by default, which allows starting Scrapy in server mode and then scheduling spiders to run.
- Added documentation for Scrapy command-line tool and all its available sub-commands. (documentation available)
- Feed exporters with pluggable backends (#197) (documentation available)
- Deferred signals (#193)
- Added two new methods to item pipeline open_spider(), close_spider() with deferred support (#195)
- Support for overriding default request headers per spider (#181)
- Replaced default Spider Manager with one with similar functionality but not depending on Twisted Plugins (#186)
- Split Debian package into two packages - the library and the service (#187)
- Scrapy log refactoring (#188)
- New extension for keeping persistent spider contexts among different runs (#203)
- Added `dont_redirect` request.meta key for avoiding redirects (#233)
- Added `dont_retry` request.meta key for avoiding retries (#234) (see the sketch below)
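
Both meta keys act per request; a minimal sketch (the URL is a placeholder):

```python
import scrapy

# Hypothetical request that must be fetched exactly as given:
# dont_redirect disables RedirectMiddleware for this request and
# dont_retry disables RetryMiddleware.
request = scrapy.Request(
    "http://example.com/exact-resource",
    meta={"dont_redirect": True, "dont_retry": True},
)
```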
- New `scrapy` command which replaces the old `scrapy-ctl.py` (#199)
  - there is only one global `scrapy` command now, instead of one `scrapy-ctl.py` per project
  - Added `scrapy.bat` script for running more conveniently from Windows
- Added bash completion to command-line tool (#210)
- Renamed command `start` to `runserver` (#209)
- `url` and `body` attributes of Request objects are now read-only (#230)
- `Request.copy()` and `Request.replace()` now also copy their `callback` and `errback` attributes (#231)
- Removed `UrlFilterMiddleware` from `scrapy.contrib` (already disabled by default)
- Offsite middleware doesn't filter out any request coming from a spider that doesn't have an allowed_domains attribute (#225)
- Removed Spider Manager `load()` method. Now spiders are loaded in the `__init__` method itself.
- Changes to Scrapy Manager (now called "Crawler"):
  - `scrapy.core.manager.ScrapyManager` class renamed to `scrapy.crawler.Crawler`
  - `scrapy.core.manager.scrapymanager` singleton moved to `scrapy.project.crawler`
- Moved module: `scrapy.contrib.spidermanager` to `scrapy.spidermanager`
- Spider Manager singleton moved from `scrapy.spider.spiders` to the `spiders` attribute of the `scrapy.project.crawler` singleton.
- moved Stats Collector classes: (#204)
  - `scrapy.stats.collector.StatsCollector` to `scrapy.statscol.StatsCollector`
  - `scrapy.stats.collector.SimpledbStatsCollector` to `scrapy.contrib.statscol.SimpledbStatsCollector`
- default per-command settings are now specified in the `default_settings` attribute of the command object class (#201)
- changed arguments of Item pipeline `process_item()` method from `(spider, item)` to `(item, spider)` (see the sketch after this list)
  - backward compatibility kept (with deprecation warning)
- moved `scrapy.core.signals` module to `scrapy.signals`
  - backward compatibility kept (with deprecation warning)
- moved `scrapy.core.exceptions` module to `scrapy.exceptions`
  - backward compatibility kept (with deprecation warning)
- added `handles_request()` class method to `BaseSpider`
- dropped `scrapy.log.exc()` function (use `scrapy.log.err()` instead)
- dropped `component` argument of `scrapy.log.msg()` function
- dropped `scrapy.log.log_level` attribute
- Added `from_settings()` class methods to Spider Manager, and Item Pipeline Manager

- Added `HTTPCACHE_IGNORE_SCHEMES` setting to ignore certain schemes on HttpCacheMiddleware (#225)
- Added `SPIDER_QUEUE_CLASS` setting which defines the spider queue to use (#220)
- Added `KEEP_ALIVE` setting (#220)
- Removed `SERVICE_QUEUE` setting (#220)
- Removed `COMMANDS_SETTINGS_MODULE` setting (#201)
- Renamed `REQUEST_HANDLERS` to `DOWNLOAD_HANDLERS` and made download handlers classes (instead of functions)
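
The `process_item()` argument swap is the change most likely to bite existing pipelines; a minimal before/after sketch (the pipeline name and body are hypothetical):

```python
# Before this release the Item pipeline method took (spider, item):
#     def process_item(self, spider, item): ...
# From this release on, the item comes first:
class PricePipeline:
    def process_item(self, item, spider):
        # Hypothetical pipeline body; return the item to pass it on
        # to the next pipeline stage.
        return item
```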
The numbers like #NNN reference tickets in the old issue tracker (Trac) which is no longer available.
- Added SMTP-AUTH support to scrapy.mail
- New settings added: `MAIL_USER`, `MAIL_PASS` (2065 | #149) (see the settings sketch after this list)
- Added new scrapy-ctl view command - To view URL in the browser, as seen by Scrapy (2039)
- Added web service for controlling Scrapy process (this also deprecates the web console) (2053 | #167)
- Support for running Scrapy as a service, for production systems (1988, 2054, 2055, 2056, 2057 | #168)
- Added wrapper induction library (documentation only available in source code for now) (2011)
- Simplified and improved response encoding support (1961, 1969)
- Added `LOG_ENCODING` setting (1956, documentation available)
- Added `RANDOMIZE_DOWNLOAD_DELAY` setting (enabled by default) (1923, doc available)
- `MailSender` is no longer IO-blocking (1955 | #146)
- Linkextractors and new Crawlspider now handle relative base tag urls (1960 | #148)
- Several improvements to Item Loaders and processors (2022, 2023, 2024, 2025, 2026, 2027, 2028, 2029, 2030)
- Added support for adding variables to telnet console (2047 | #165)
- Support for requests without callbacks (2050 | #166)
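
SMTP authentication for scrapy.mail is driven by the two new settings; a minimal settings.py sketch (the host and credentials are placeholders, and `MAIL_HOST` is assumed here as the companion setting that names the server):

```python
# settings.py sketch: scrapy.mail uses these values to authenticate
# against the SMTP server when sending mail.
MAIL_HOST = "smtp.example.com"
MAIL_USER = "mailer@example.com"
MAIL_PASS = "secret"
```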
- Changed `Spider.domain_name` to `Spider.name` (SEP-012, 1975)
- `Response.encoding` is now the detected encoding (1961)
- `HttpErrorMiddleware` now returns None or raises an exception (2006 | #157)
- `scrapy.command` modules relocation (2035, 2036, 2037)
- Added `ExecutionQueue` for feeding spiders to scrape (2034)
- Removed `ExecutionEngine` singleton (2039)
- Ported `S3ImagesStore` (images pipeline) to use boto and threads (2033)
- Moved module: `scrapy.management.telnet` to `scrapy.telnet` (2047)
- Changed default `SCHEDULER_ORDER` to `DFO` (1939)
The numbers like #NNN reference tickets in the old issue tracker (Trac) which is no longer available.
- Added `DEFAULT_RESPONSE_ENCODING` setting (1809)
- Added `dont_click` argument to `FormRequest.from_response()` method (1813, 1816) (see the sketch after this list)
- Added `clickdata` argument to `FormRequest.from_response()` method (1802, 1803)
- Added support for HTTP proxies (`HttpProxyMiddleware`) (1781, 1785)
- Offsite spider middleware now logs messages when filtering out requests (1841)
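
The two new `FormRequest.from_response()` arguments are easiest to see together; a minimal sketch (spider name, URL, field names, and control name are hypothetical):

```python
import scrapy
from scrapy.http import FormRequest

class LoginSpider(scrapy.Spider):
    # Hypothetical spider demonstrating dont_click and clickdata.
    name = "login"
    start_urls = ["http://example.com/login"]

    def parse(self, response):
        # clickdata picks which submit control to "click"; passing
        # dont_click=True instead submits without clicking any control.
        yield FormRequest.from_response(
            response,
            formdata={"user": "jane", "pass": "secret"},
            clickdata={"name": "login_button"},
            callback=self.after_login,
        )

    def after_login(self, response):
        self.logger.info("Logged in: %s", response.url)
```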
- Changed `scrapy.utils.response.get_meta_refresh()` signature (1804)
- Removed deprecated `scrapy.item.ScrapedItem` class - use `scrapy.item.Item` instead (1838)
- Removed deprecated `scrapy.xpath` module - use `scrapy.selector` instead (1836)
- Removed deprecated `core.signals.domain_open` signal - use `core.signals.domain_opened` instead (1822)
- `log.msg()` now receives a `spider` argument (1822)
  - Old domain argument has been deprecated and will be removed in 0.9. For spiders, you should always use the `spider` argument and pass spider references. If you really want to pass a string, use the `component` argument instead.
- Changed core signals `domain_opened`, `domain_closed`, `domain_idle`
- Changed Item pipeline to use spiders instead of domains
  - The `domain` argument of the `process_item()` item pipeline method was changed to `spider`; the new signature is: `process_item(spider, item)` (1827 | #105)
  - To quickly port your code (to work with Scrapy 0.8) just use `spider.domain_name` where you previously used `domain`.
- Changed Stats API to use spiders instead of domains (1849 | #113)
  - `StatsCollector` was changed to receive spider references (instead of domains) in its methods (`set_value`, `inc_value`, etc).
  - added `StatsCollector.iter_spider_stats()` method
  - removed `StatsCollector.list_domains()` method
  - Also, Stats signals were renamed and now pass around spider references (instead of domains).
  - To quickly port your code (to work with Scrapy 0.8) just use `spider.domain_name` where you previously used `domain`. `spider_stats` contains exactly the same data as `domain_stats`.
- `CloseDomain` extension moved to `scrapy.contrib.closespider.CloseSpider` (1833)
  - Its settings were also renamed:
    - `CLOSEDOMAIN_TIMEOUT` to `CLOSESPIDER_TIMEOUT`
    - `CLOSEDOMAIN_ITEMCOUNT` to `CLOSESPIDER_ITEMCOUNT`
- Removed deprecated `SCRAPYSETTINGS_MODULE` environment variable - use `SCRAPY_SETTINGS_MODULE` instead (1840)
- Renamed setting: `REQUESTS_PER_DOMAIN` to `CONCURRENT_REQUESTS_PER_SPIDER` (1830, 1844)
- Renamed setting: `CONCURRENT_DOMAINS` to `CONCURRENT_SPIDERS` (1830)
- Refactored HTTP Cache middleware
  - HTTP Cache middleware has been heavily refactored, retaining the same functionality except for the domain sectorization which was removed (1843)
- Renamed exception: `DontCloseDomain` to `DontCloseSpider` (1859 | #120)
- Renamed extension: `DelayedCloseDomain` to `SpiderCloseDelay` (1861 | #121)
- Removed obsolete `scrapy.utils.markup.remove_escape_chars` function - use `scrapy.utils.markup.replace_escape_chars` instead (1865)
First release of Scrapy.