Commit 8d5dc2f

jgunthorpe authored and gregkh committed
RDMA/umem: Prevent small pages from being returned by ib_umem_find_best_pgsz()
[ Upstream commit 10c75cc ]

rdma_for_each_block() makes assumptions about how the SGL is constructed
that don't work if the block size is below the page size used to build the
SGL.

The rules for umem SGL construction require that the SGs all be PAGE_SIZE
aligned and we don't encode the actual byte offset of the VA range inside
the SGL using offset and length. So rdma_for_each_block() has no idea where
the actual starting/ending point is to compute the first/last block
boundary if the starting address should be within a SGL.

Fixing the SGL construction turns out to be really hard, and will be the
subject of other patches. For now block smaller pages.

Fixes: 4a35339 ("RDMA/umem: Add API to find best driver supported page size in an MR")
Link: https://lore.kernel.org/r/2-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
1 parent 488229b commit 8d5dc2f

1 file changed: drivers/infiniband/core/umem.c (6 additions, 0 deletions)
@@ -151,6 +151,12 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
 	dma_addr_t mask;
 	int i;
 
+	/* rdma_for_each_block() has a bug if the page size is smaller than the
+	 * page size used to build the umem. For now prevent smaller page sizes
+	 * from being returned.
+	 */
+	pgsz_bitmap &= GENMASK(BITS_PER_LONG - 1, PAGE_SHIFT);
+
 	/* At minimum, drivers must support PAGE_SIZE or smaller */
 	if (WARN_ON(!(pgsz_bitmap & GENMASK(PAGE_SHIFT, 0))))
 		return 0;
