
Commit 488229b

jgunthorpe authored and gregkh committed
RDMA/umem: Fix ib_umem_find_best_pgsz() for mappings that cross a page boundary
[ Upstream commit a40c20d ]

It is possible for a single SGL to span an aligned boundary, eg if the SGL is

  61440 -> 90112

Then the length is 28672, which currently limits the block size to 32k.

With a 32k page size the two covering blocks will be:

  32768 -> 65536 and 65536 -> 98304

However, the correct answer is a 128K block size which will span the whole 28672 bytes in a single block.

Instead of limiting based on length, figure out which high IOVA bits don't change between the start and end addresses. That is the highest useful page size.

Fixes: 4a35339 ("RDMA/umem: Add API to find best driver supported page size in an MR")
Link: https://lore.kernel.org/r/1-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
1 parent 09aa2ec commit 488229b

1 file changed: drivers/infiniband/core/umem.c (7 additions, 2 deletions)
```diff
@@ -156,8 +156,13 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
 		return 0;
 
 	va = virt;
-	/* max page size not to exceed MR length */
-	mask = roundup_pow_of_two(umem->length);
+	/* The best result is the smallest page size that results in the minimum
+	 * number of required pages. Compute the largest page size that could
+	 * work based on VA address bits that don't change.
+	 */
+	mask = pgsz_bitmap &
+	       GENMASK(BITS_PER_LONG - 1,
+		       bits_per((umem->length - 1 + virt) ^ virt));
 	/* offset into first SGL */
 	pgoff = umem->address & ~PAGE_MASK;
```
