## Task 6: Query the full CDX index and download those captures from AWS S3

Some of our users only want to download a small subset of the crawl. They want to run queries against an index, either the CDX index we just talked about, or the columnar index, which we'll talk about later.

The CDX index is documented [here](https://github.com/webrecorder/pywb/wiki/CDX-Server-API#api-reference) and can be accessed through an HTTP API.

Right now there is no specific tool in Java for querying the CDX index; however, there is a very useful Python tool for working with it: [cdx_toolkit](https://github.com/cocrawler/cdx_toolkit). Please refer to the Python Whirlwind Tour for more details.

In this task we will achieve the same results using direct HTTP API calls and JWARC.
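
As a sketch of what the query step does under the hood, the CDX Server API query URL can be assembled like this (the endpoint and parameter values are the ones used in this task; the class name is ours):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class BuildCdxQuery {
    public static void main(String[] args) {
        // Each crawl has its own CDX endpoint; the crawl identifier is part of the path.
        String endpoint = "https://index.commoncrawl.org/CC-MAIN-2024-22-index";
        String url = "an.wikipedia.org/wiki/Escopete";
        String query = endpoint
                + "?url=" + URLEncoder.encode(url, StandardCharsets.UTF_8)
                + "&from=20240518015810"
                + "&to=20240518015810"
                + "&output=json"; // one JSON object per result line
        System.out.println(query);
    }
}
```

Fetching this URL (with cURL or `java.net.http.HttpClient`) returns the matching index entries.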

Run

```make query_cdx```

The output looks like this:

```
demonstrate that we have this entry in the index
cdxt --crawl CC-MAIN-2024-22 --from 20240518015810 --to 20240518015810 iter an.wikipedia.org/wiki/Escopete
status 200, timestamp 20240518015810, url https://an.wikipedia.org/wiki/Escopete
```

There's a lot going on here so let's unpack it a little.

#### Check that the crawl has a record for the page we are interested in

We check for capture results by querying index.commoncrawl.org with GET parameters, specifying the crawl (`CC-MAIN-2024-22-index`), the exact URL `an.wikipedia.org/wiki/Escopete`, and the timestamp range `from=20240518015810` and `to=20240518015810`. The result tells us that the crawl successfully fetched this page at timestamp `20240518015810`.

* Captures are named by the surtkey and the time.
* There is a separate index endpoint for each crawl; to search across every crawl you would have to query each crawl's endpoint in turn, which is what cdx_toolkit's `--cc` option does. For queries spanning many crawls, the columnar index (discussed later) is usually a better fit.
* You can use the parameter `limit=<N>` to limit the number of results returned. In this case, because we have restricted the timestamp range to a single value, we only expect one result.
* URLs may be specified with wildcards to return even more results: `"an.wikipedia.org/wiki/Escop*"` matches `an.wikipedia.org/wiki/Escopulión` and `an.wikipedia.org/wiki/Escopete`.
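
With `output=json` the index returns one JSON object per matching capture. Below is a minimal sketch of pulling the fields we need out of such a line with plain string handling; the sample line is hypothetical (the filename is shortened and the offset/length are made up) but mirrors the fields the API returns, and a real program would use a JSON library:

```java
public class ParseCdxLine {
    // Naive extractor for one CDX JSON line; assumes flat "key": "value" pairs.
    static String field(String json, String key) {
        String marker = "\"" + key + "\": \"";
        int start = json.indexOf(marker) + marker.length();
        return json.substring(start, json.indexOf('"', start));
    }

    public static void main(String[] args) {
        // Hypothetical sample response line shaped like the CDX API's JSON output.
        String line = "{\"urlkey\": \"org,wikipedia,an)/wiki/escopete\", "
                + "\"timestamp\": \"20240518015810\", "
                + "\"url\": \"https://an.wikipedia.org/wiki/Escopete\", "
                + "\"status\": \"200\", "
                + "\"filename\": \"crawl-data/CC-MAIN-2024-22/...warc.gz\", "
                + "\"offset\": \"123456789\", \"length\": \"12345\"}";
        System.out.println(field(line, "status"));    // 200
        System.out.println(field(line, "timestamp")); // 20240518015810
        System.out.println(field(line, "offset") + " " + field(line, "length"));
    }
}
```

The `filename`, `offset`, and `length` fields are what we need for the download step in the next section.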

#### Retrieve the fetched content as WARC

Next, we make another HTTP call to retrieve the content and save it locally as a new WARC file, again specifying the exact URL, crawl identifier, and timestamp range.

This creates the WARC file `TEST-000000.extracted.warc.gz`, which contains just the `response` record we requested. There is no `warcinfo` record here, since the byte range request returns only the raw record from the crawl WARC.

* If you check the cURL command, you'll find that it is using the offset and length of the WARC record (as returned by the CDX index query) to make an HTTP byte range request to `data.commoncrawl.org` that isolates and returns just the single record we want from the full file. It only downloads the response WARC record because our CDX index only has the response records indexed.
* Limit, timestamp, and crawl index parameters, as well as URL wildcards, work for this request just as they do for the index query above.
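
The byte range in that cURL command is derived from the `offset` and `length` fields of the index result: HTTP byte ranges are inclusive at both ends, so the last byte is `offset + length - 1`. A small sketch with made-up sample values:

```java
public class RangeHeader {
    public static void main(String[] args) {
        // offset and length as returned by the CDX index query (sample values)
        long offset = 123456789L;
        long length = 12345L;
        // HTTP byte ranges are inclusive at both ends, hence the -1
        String range = "bytes=" + offset + "-" + (offset + length - 1);
        System.out.println(range); // bytes=123456789-123469133
    }
}
```

This is the value sent in the `Range:` header of the request to `data.commoncrawl.org`.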

### Indexing the WARC and viewing its contents

Finally, we run `jwarc cdxj` to process this new WARC into a CDXJ index, as in Task 3, and then list its records using `jwarc ls`, as in Task 2.
## Task 7: Find the right part of the columnar index