Commit 150e224

"many batches" -> "mini-batches" (#402)
1 parent 2f010aa commit 150e224

1 file changed

Lines changed: 2 additions & 2 deletions

File tree

04_mnist_basics.ipynb

@@ -4180,7 +4180,7 @@
     "\n",
     "As we saw in our discussion of data augmentation in <<chapter_production>>, we get better generalization if we can vary things during training. One simple and effective thing we can vary is what data items we put in each mini-batch. Rather than simply enumerating our dataset in order for every epoch, instead what we normally do is randomly shuffle it on every epoch, before we create mini-batches. PyTorch and fastai provide a class that will do the shuffling and mini-batch collation for you, called `DataLoader`.\n",
     "\n",
-    "A `DataLoader` can take any Python collection and turn it into an iterator over many batches, like so:"
+    "A `DataLoader` can take any Python collection and turn it into an iterator over mini-batches, like so:"
     ]
    },
    {
@@ -4239,7 +4239,7 @@
     "cell_type": "markdown",
     "metadata": {},
     "source": [
-    "When we pass a `Dataset` to a `DataLoader` we will get back many batches which are themselves tuples of tensors representing batches of independent and dependent variables:"
+    "When we pass a `Dataset` to a `DataLoader` we will get back mini-batches which are themselves tuples of tensors representing batches of independent and dependent variables:"
     ]
    },
    {
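The shuffle-then-collate behavior the changed text describes can be sketched in plain Python, without fastai or PyTorch. The `simple_dataloader` helper below is a hypothetical illustration, not fastai's actual `DataLoader` API: it optionally shuffles the indices once per pass and then yields fixed-size mini-batches. Unlike the real class, it yields lists of `(independent, dependent)` pairs rather than collating them into tuples of tensors.

```python
import random

def simple_dataloader(dataset, batch_size, shuffle=False, seed=None):
    # Hypothetical sketch of DataLoader-style batching, not fastai's API.
    # Optionally shuffle the index order, then yield mini-batches.
    indices = list(range(len(dataset)))
    if shuffle:
        random.Random(seed).shuffle(indices)
    for start in range(0, len(indices), batch_size):
        yield [dataset[i] for i in indices[start:start + batch_size]]

# Any Python collection of (independent, dependent) pairs works:
ds = [(x, x % 2) for x in range(10)]
batches = list(simple_dataloader(ds, batch_size=4))
# With shuffle=False, the first mini-batch is the first four pairs,
# and the last mini-batch holds the remaining two items.
```

Passing `shuffle=True` reorders the items on each pass, which is what the notebook text recommends doing once per epoch during training.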
