
Commit 14cebf6

willdeacon authored and Christoph Hellwig committed
swiotlb: Reinstate page-alignment for mappings >= PAGE_SIZE
For swiotlb allocations >= PAGE_SIZE, the slab search historically adjusted the stride to avoid checking unaligned slots. This had the side-effect of aligning large mapping requests to PAGE_SIZE, but that was broken by 0eee5ae ("swiotlb: fix slot alignment checks"). Since this alignment could be relied upon by drivers, reinstate PAGE_SIZE alignment for swiotlb mappings >= PAGE_SIZE.

Reported-by: Michael Kelley <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
Reviewed-by: Petr Tesarik <[email protected]>
Tested-by: Nicolin Chen <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
1 parent afc5aa4 commit 14cebf6

File tree

1 file changed, +11 -7 lines


kernel/dma/swiotlb.c

Lines changed: 11 additions & 7 deletions
@@ -1014,6 +1014,17 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
 	BUG_ON(!nslots);
 	BUG_ON(area_index >= pool->nareas);
 
+	/*
+	 * Historically, swiotlb allocations >= PAGE_SIZE were guaranteed to be
+	 * page-aligned in the absence of any other alignment requirements.
+	 * 'alloc_align_mask' was later introduced to specify the alignment
+	 * explicitly, however this is passed as zero for streaming mappings
+	 * and so we preserve the old behaviour there in case any drivers are
+	 * relying on it.
+	 */
+	if (!alloc_align_mask && !iotlb_align_mask && alloc_size >= PAGE_SIZE)
+		alloc_align_mask = PAGE_SIZE - 1;
+
 	/*
 	 * Ensure that the allocation is at least slot-aligned and update
 	 * 'iotlb_align_mask' to ignore bits that will be preserved when
@@ -1028,13 +1039,6 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
 	 */
 	stride = get_max_slots(max(alloc_align_mask, iotlb_align_mask));
 
-	/*
-	 * For allocations of PAGE_SIZE or larger only look for page aligned
-	 * allocations.
-	 */
-	if (alloc_size >= PAGE_SIZE)
-		stride = umax(stride, PAGE_SHIFT - IO_TLB_SHIFT + 1);
-
 	spin_lock_irqsave(&area->lock, flags);
 	if (unlikely(nslots > pool->area_nslabs - area->used))
 		goto not_found;
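To see why expressing the page alignment as a mask is equivalent to the deleted umax() stride bump, here is a minimal userspace sketch (not kernel code). It assumes 4 KiB pages and the kernel's 2 KiB swiotlb slots (IO_TLB_SHIFT == 11), and its get_max_slots() is modeled after the helper in kernel/dma/swiotlb.c.

/*
 * Userspace model of the stride calculation in swiotlb_search_pool_area().
 * Assumptions: PAGE_SHIFT == 12 (4 KiB pages), IO_TLB_SHIFT == 11 (2 KiB
 * slots); get_max_slots() is modeled after the kernel helper.
 */
#include <assert.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define IO_TLB_SHIFT	11	/* 2 KiB swiotlb slots */

/* Number of slots the search steps over to honour an alignment mask. */
static unsigned long get_max_slots(unsigned long boundary_mask)
{
	return (boundary_mask >> IO_TLB_SHIFT) + 1;
}

int main(void)
{
	unsigned long alloc_align_mask = 0;	/* zero for streaming mappings */
	unsigned long iotlb_align_mask = 0;	/* assume no device quirks */
	unsigned long alloc_size = PAGE_SIZE;	/* a mapping >= PAGE_SIZE */

	/* The check this patch reinstates: default to page alignment. */
	if (!alloc_align_mask && !iotlb_align_mask && alloc_size >= PAGE_SIZE)
		alloc_align_mask = PAGE_SIZE - 1;

	unsigned long stride = get_max_slots(alloc_align_mask);

	/* Same two-slot stride the deleted umax() bump used to produce. */
	assert(stride == PAGE_SHIFT - IO_TLB_SHIFT + 1);
	printf("stride = %lu slots\n", stride);
	return 0;
}

The mask-based form has one advantage over the stride-only bump: the mask also participates in the per-slot address check inside the search loop, so the returned address itself is page-aligned, which the stride alone no longer guaranteed after 0eee5ae.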
