Commit 96fda68

Merge pull request #10 from commoncrawl/laurie/gzip

docs: propagate explanation about gzip to java

2 parents 1e5ac92 + 5d06ceb commit 96fda68

1 file changed: README.md (7 additions, 5 deletions)
@@ -20,7 +20,7 @@ In the Whirlwind Tour, we will:
 1) explore the WARC, WET and WAT file formats used to store Common Crawl's data.
 2) play with some useful Java libraries for interacting with the data: [jwarc](https://github.com/iipc/jwarc), TBA if needed
 and [duckdb](https://duckdb.org/).
-3) learn about how the data is compressed to allow random access.
+3) learn about how the data is compressed in an unusual way to allow random access.
 4) use the CDXJ index and the columnar index to access the data we want.
 
 **Prerequisites:** To get the most out of this tour, you should be comfortable with Maven, running commands on the command line, and basic SQL. Some knowledge of HTTP requests and HTML is also helpful but not essential. We assume you have [make](https://www.gnu.org/software/make/) and [Maven](https://maven.apache.org/) installed.
@@ -240,11 +240,13 @@ The JSON blob has enough information to cleanly isolate the raw data of a single
 
 ## Task 4: Use the CDXJ index to extract a subset of raw content from the local WARC, WET, and WAT
 
-Normally, compressed files aren't random access. However, the WARC files use a trick to make this possible, which is that every record needs to be separately compressed. The `gzip` compression utility supports this, but it's rarely used.
+Normally, compressed files aren't random access -- if you want to read the content near the end of a compressed file, you have to decompress everything up to the content that you actually want. This would make fetching a subset of the data very expensive.
 
-To extract one record from a warc file, all you need to know is the filename and the offset into the file. If you're reading over the web, it really helps to know the exact length of the record.
+Instead of normal whole-file compression, WARC files use "one weird trick" -- two gzipped files concatenated together are a valid gzip file. And if you know the byte offset of the second file, you can seek to that offset and then ungzip just the second file's contents.
 
-Run:
+WARC.gz files do this trick for every WARC record. When reading, the CDXJ index (that we built in Task 3) contains the byte offsets and lengths for every record.
+
+Let's extract some individual records from our warc.gz files. Run:
 
 ```make extract```

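The concatenation trick described in this hunk is easy to demonstrate with nothing but the JDK's `java.util.zip`. This is a side sketch, not code from the repo -- the class name and the two sample "records" are invented for illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipConcatDemo {
    // Compress a string into a single gzip member.
    public static byte[] gzip(String s) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(s.getBytes("UTF-8"));
        }
        return bos.toByteArray();
    }

    // Decompress gzip data starting at the given byte offset.
    public static String gunzip(byte[] data, int offset) throws IOException {
        ByteArrayInputStream bis =
                new ByteArrayInputStream(data, offset, data.length - offset);
        try (GZIPInputStream in = new GZIPInputStream(bis)) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            for (int n; (n = in.read(buf)) > 0; ) {
                out.write(buf, 0, n);
            }
            return out.toString("UTF-8");
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] rec1 = gzip("record one\n");
        byte[] rec2 = gzip("record two\n");

        // Concatenating two gzip members yields one valid gzip file,
        // just like consecutive records in a .warc.gz.
        byte[] combined = new byte[rec1.length + rec2.length];
        System.arraycopy(rec1, 0, combined, 0, rec1.length);
        System.arraycopy(rec2, 0, combined, rec1.length, rec2.length);

        // Seek to the second member's offset and decompress only it --
        // this is what the per-record offsets in the CDXJ index enable.
        System.out.print(gunzip(combined, rec1.length)); // prints "record two"
    }
}
```

Decompressing from offset 0 would yield both records, since gzip readers treat concatenated members as one stream; decompressing from the stored offset yields just the record you asked for.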
@@ -271,7 +273,7 @@ Notice that we extracted HTML from the WARC, text from WET, and JSON from the WAT
 
 ## Task 5: Wreck the WARC by compressing it wrong
 
-As mentioned earlier, WARC/WET/WAT files look like they're gzipped, but they're actually gzipped in a particular way that allows random access. This means that you can't `gunzip` and then `gzip` a warc without wrecking random access. This example:
+As mentioned earlier, WARC/WET/WAT files look like they're normal gzipped files, but they're actually gzipped in a particular way that allows random access. This means that you can't `gunzip` and then `gzip` a warc without wrecking random access. This example:
 
 * creates a copy of one of the warc files in the repo
 * uncompresses it

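The repo presumably demonstrates this with the CLI tools via `make`. As a side sketch using only the JDK's `java.util.zip` (class and helper names invented, not from the repo), you can watch random access break: a `gunzip`-then-`gzip` round trip turns many per-record gzip members into one big member, so the old offset no longer points at a gzip header.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class WreckDemo {
    // Compress a string into a single gzip member.
    public static byte[] gzip(String s) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(s.getBytes("UTF-8"));
        }
        return bos.toByteArray();
    }

    // Decompress gzip data starting at the given byte offset.
    public static String gunzip(byte[] data, int offset) throws IOException {
        ByteArrayInputStream bis =
                new ByteArrayInputStream(data, offset, data.length - offset);
        try (GZIPInputStream in = new GZIPInputStream(bis)) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            for (int n; (n = in.read(buf)) > 0; ) {
                out.write(buf, 0, n);
            }
            return out.toString("UTF-8");
        }
    }

    // True if decompression succeeds at the given offset,
    // i.e. a gzip member actually starts there.
    public static boolean randomAccessWorks(byte[] data, int offset) {
        try {
            gunzip(data, offset);
            return true;
        } catch (IOException e) {
            return false; // no gzip header at that offset
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] rec1 = gzip("record one\n");
        byte[] rec2 = gzip("record two\n");
        byte[] warcLike = new byte[rec1.length + rec2.length];
        System.arraycopy(rec1, 0, warcLike, 0, rec1.length);
        System.arraycopy(rec2, 0, warcLike, rec1.length, rec2.length);

        int offset = rec1.length; // where the second record's member starts

        // Per-record compression: the stored offset lands on a gzip header.
        System.out.println(randomAccessWorks(warcLike, offset)); // true

        // "gunzip then gzip": the whole stream becomes ONE gzip member...
        byte[] wrecked = gzip(gunzip(warcLike, 0));

        // ...so the old offset no longer marks a member boundary.
        System.out.println(randomAccessWorks(wrecked, offset)); // false
    }
}
```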