Commit 643ad15
Merge branch 'mm-pkeys-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 protection key support from Ingo Molnar:
 "This tree adds support for a new memory protection hardware feature
  that is available in upcoming Intel CPUs: 'protection keys' (pkeys).

  There's a background article at LWN.net: https://lwn.net/Articles/643797/

  The gist is that protection keys allow the encoding of
  user-controllable permission masks in the pte. So instead of having a
  fixed protection mask in the pte (which needs a system call to change
  and works on a per page basis), the user can map a (handful of)
  protection mask variants and can change the masks runtime relatively
  cheaply, without having to change every single page in the affected
  virtual memory range.

  This allows the dynamic switching of the protection bits of large
  amounts of virtual memory, via user-space instructions. It also
  allows more precise control of MMU permission bits: for example the
  executable bit is separate from the read bit (see more about that
  below).

  This tree adds the MM infrastructure and low level x86 glue needed
  for that, plus it adds a high level API to make use of protection
  keys - if a user-space application calls:

        mmap(..., PROT_EXEC);

  or

        mprotect(ptr, sz, PROT_EXEC);

  (note PROT_EXEC-only, without PROT_READ/WRITE), the kernel will
  notice this special case, and will set a special protection key on
  this memory range. It also sets the appropriate bits in the
  Protection Keys User Rights (PKRU) register so that the memory
  becomes unreadable and unwritable.

  So using protection keys the kernel is able to implement 'true'
  PROT_EXEC on x86 CPUs: without protection keys PROT_EXEC implies
  PROT_READ as well.

  Unreadable executable mappings have security advantages: they cannot
  be read via information leaks to figure out ASLR details, nor can
  they be scanned for ROP gadgets - and they cannot be used by exploits
  for data purposes either.

  We know about no user-space code that relies on pure PROT_EXEC
  mappings today, but binary loaders could start making use of this new
  feature to map binaries and libraries in a more secure fashion.

  There is other pending pkeys work that offers more high level system
  call APIs to manage protection keys - but those are not part of this
  pull request.

  Right now there's a Kconfig that controls this feature
  (CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) that is default enabled
  (like most x86 CPU feature enablement code that has no runtime
  overhead), but it's not user-configurable at the moment.
  If there's any serious problem with this then we can make it
  configurable and/or flip the default"

* 'mm-pkeys-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (38 commits)
  x86/mm/pkeys: Fix mismerge of protection keys CPUID bits
  mm/pkeys: Fix siginfo ABI breakage caused by new u64 field
  x86/mm/pkeys: Fix access_error() denial of writes to write-only VMA
  mm/core, x86/mm/pkeys: Add execute-only protection keys support
  x86/mm/pkeys: Create an x86 arch_calc_vm_prot_bits() for VMA flags
  x86/mm/pkeys: Allow kernel to modify user pkey rights register
  x86/fpu: Allow setting of XSAVE state
  x86/mm: Factor out LDT init from context init
  mm/core, x86/mm/pkeys: Add arch_validate_pkey()
  mm/core, arch, powerpc: Pass a protection key in to calc_vm_flag_bits()
  x86/mm/pkeys: Actually enable Memory Protection Keys in the CPU
  x86/mm/pkeys: Add Kconfig prompt to existing config option
  x86/mm/pkeys: Dump pkey from VMA in /proc/pid/smaps
  x86/mm/pkeys: Dump PKRU with other kernel registers
  mm/core, x86/mm/pkeys: Differentiate instruction fetches
  x86/mm/pkeys: Optimize fault handling in access_error()
  mm/core: Do not enforce PKEY permissions on remote mm access
  um, pkeys: Add UML arch_*_access_permitted() methods
  mm/gup, x86/mm/pkeys: Check VMAs and PTEs for protection keys
  x86/mm/gup: Simplify get_user_pages() PTE bit handling
  ...
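As a concrete illustration of the PROT_EXEC-only special case described above, here is a minimal userspace sketch (error handling elided; the helper name is illustrative). On pkeys-capable hardware this kernel backs the range with an execute-only protection key, elsewhere it degrades to the traditional readable PROT_EXEC:

    #include <stddef.h>
    #include <sys/mman.h>

    /* Sketch: request PROT_EXEC with no PROT_READ/PROT_WRITE.  With
     * pkeys the kernel assigns an execute-only protection key and
     * clears read/write rights for it in PKRU, so a data load from
     * 'code' raises SIGSEGV with si_code == SEGV_PKUERR. */
    int make_execute_only(void *code, size_t len)
    {
    	return mprotect(code, len, PROT_EXEC);
    }

An mmap(..., PROT_EXEC, ...) request without read/write bits takes the same path at mapping creation time.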
2 parents: 24b5e20 + 0d47638


85 files changed: +1406 -241 lines

Documentation/kernel-parameters.txt

+3

@@ -987,6 +987,9 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			See Documentation/x86/intel_mpx.txt for more
 			information about the feature.
 
+	nopku		[X86] Disable Memory Protection Keys CPU feature found
+			in some Intel CPUs.
+
 	eagerfpu=	[X86]
 			on	enable eager fpu restore
 			off	disable eager fpu restore

arch/cris/arch-v32/drivers/cryptocop.c

+2 -6

@@ -2719,9 +2719,7 @@ static int cryptocop_ioctl_process(struct inode *inode, struct file *filp, unsig
 	/* Acquire the mm page semaphore. */
 	down_read(&current->mm->mmap_sem);
 
-	err = get_user_pages(current,
-			     current->mm,
-			     (unsigned long int)(oper.indata + prev_ix),
+	err = get_user_pages((unsigned long int)(oper.indata + prev_ix),
 			     noinpages,
 			     0,  /* read access only for in data */
 			     0, /* no force */
@@ -2736,9 +2734,7 @@ static int cryptocop_ioctl_process(struct inode *inode, struct file *filp, unsig
 	}
 	noinpages = err;
 	if (oper.do_cipher){
-		err = get_user_pages(current,
-				     current->mm,
-				     (unsigned long int)oper.cipher_outdata,
+		err = get_user_pages((unsigned long int)oper.cipher_outdata,
 				     nooutpages,
 				     1, /* write access for out data */
 				     0, /* no force */
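The same mechanical conversion repeats across the arch call sites in this merge: the explicit tsk/mm arguments are dropped and the functions operate on current->mm implicitly. Schematically (argument names illustrative, following the pattern above):

    /* before: the caller names the task and mm explicitly */
    err = get_user_pages(current, current->mm, start, nr_pages,
    			 write, force, pages, vmas);

    /* after: current->mm is implied by the call itself; callers that
     * really need another process's mm use the *_remote() variants
     * introduced alongside this series */
    err = get_user_pages(start, nr_pages, write, force, pages, vmas);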

arch/ia64/include/uapi/asm/siginfo.h

+9 -4

@@ -63,10 +63,15 @@ typedef struct siginfo {
 			unsigned int _flags;	/* see below */
 			unsigned long _isr;	/* isr */
 			short _addr_lsb;	/* lsb of faulting address */
-			struct {
-				void __user *_lower;
-				void __user *_upper;
-			} _addr_bnd;
+			union {
+				/* used when si_code=SEGV_BNDERR */
+				struct {
+					void __user *_lower;
+					void __user *_upper;
+				} _addr_bnd;
+				/* used when si_code=SEGV_PKUERR */
+				__u32 _pkey;
+			};
 		} _sigfault;
 
 		/* SIGPOLL */
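Userspace reaches this new union member through siginfo when si_code is SEGV_PKUERR. A minimal sketch of a handler (the si_pkey accessor is an assumption here: it presumes a libc/uapi that already exposes the field, otherwise the raw siginfo layout has to be read directly):

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Illustration only: report which protection key blocked the access.
     * fprintf() is not async-signal-safe; a real handler would use
     * write() or just record the values. */
    static void segv_handler(int sig, siginfo_t *si, void *ucontext)
    {
    	if (si->si_code == SEGV_PKUERR)
    		fprintf(stderr, "pkey fault at %p, pkey %d\n",
    			si->si_addr, (int)si->si_pkey); /* assumed accessor */
    	_exit(1);
    }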

arch/ia64/kernel/err_inject.c

+1 -2

@@ -142,8 +142,7 @@ store_virtual_to_phys(struct device *dev, struct device_attribute *attr,
 	u64 virt_addr=simple_strtoull(buf, NULL, 16);
 	int ret;
 
-	ret = get_user_pages(current, current->mm, virt_addr,
-			1, VM_READ, 0, NULL, NULL);
+	ret = get_user_pages(virt_addr, 1, VM_READ, 0, NULL, NULL);
 	if (ret<=0) {
 #ifdef ERR_INJ_DEBUG
 		printk("Virtual address %lx is not existing.\n",virt_addr);

arch/mips/include/uapi/asm/siginfo.h

+9 -4

@@ -86,10 +86,15 @@ typedef struct siginfo {
 			int _trapno;	/* TRAP # which caused the signal */
 #endif
 			short _addr_lsb;
-			struct {
-				void __user *_lower;
-				void __user *_upper;
-			} _addr_bnd;
+			union {
+				/* used when si_code=SEGV_BNDERR */
+				struct {
+					void __user *_lower;
+					void __user *_upper;
+				} _addr_bnd;
+				/* used when si_code=SEGV_PKUERR */
+				__u32 _pkey;
+			};
 		} _sigfault;
 
 		/* SIGPOLL, SIGXFSZ (To do ...) */

arch/mips/mm/gup.c

+1 -2

@@ -286,8 +286,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages, int write,
 	start += nr << PAGE_SHIFT;
 	pages += nr;
 
-	ret = get_user_pages_unlocked(current, mm, start,
-				      (end - start) >> PAGE_SHIFT,
+	ret = get_user_pages_unlocked(start, (end - start) >> PAGE_SHIFT,
 				      write, 0, pages);
 
 	/* Have to be a bit careful with return values */

arch/powerpc/include/asm/mman.h

+3 -2

@@ -18,11 +18,12 @@
  * This file is included by linux/mman.h, so we can't use cacl_vm_prot_bits()
  * here.  How important is the optimization?
  */
-static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot)
+static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
+		unsigned long pkey)
 {
 	return (prot & PROT_SAO) ? VM_SAO : 0;
 }
-#define arch_calc_vm_prot_bits(prot) arch_calc_vm_prot_bits(prot)
+#define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
 
 static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)
 {

arch/powerpc/include/asm/mmu_context.h

+12

@@ -148,5 +148,17 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm,
 {
 }
 
+static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
+		bool write, bool execute, bool foreign)
+{
+	/* by default, allow everything */
+	return true;
+}
+
+static inline bool arch_pte_access_permitted(pte_t pte, bool write)
+{
+	/* by default, allow everything */
+	return true;
+}
 #endif /* __KERNEL__ */
 #endif /* __ASM_POWERPC_MMU_CONTEXT_H */
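These stubs (repeated for s390, um and unicore32 below) are the permissive defaults for architectures without protection keys. For contrast, the x86 override added by this series looks roughly like the following sketch (paraphrased from arch/x86/include/asm/mmu_context.h in this merge, not part of this hunk):

    static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
    		bool write, bool execute, bool foreign)
    {
    	/* pkeys never restrict instruction fetches */
    	if (execute)
    		return true;
    	/* a foreign (remote-mm) access is not subject to our PKRU */
    	if (foreign || vma_is_foreign(vma))
    		return true;
    	return __pkru_allows_pkey(vma_pkey(vma), write);
    }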

arch/s390/include/asm/mmu_context.h

+12

@@ -136,4 +136,16 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm,
 {
 }
 
+static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
+		bool write, bool execute, bool foreign)
+{
+	/* by default, allow everything */
+	return true;
+}
+
+static inline bool arch_pte_access_permitted(pte_t pte, bool write)
+{
+	/* by default, allow everything */
+	return true;
+}
 #endif /* __S390_MMU_CONTEXT_H */

arch/s390/mm/gup.c

+1 -3

@@ -210,7 +210,6 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
 int get_user_pages_fast(unsigned long start, int nr_pages, int write,
 			struct page **pages)
 {
-	struct mm_struct *mm = current->mm;
 	int nr, ret;
 
 	might_sleep();
@@ -222,8 +221,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages, int write,
 	/* Try to get the remaining pages with get_user_pages */
 	start += nr << PAGE_SHIFT;
 	pages += nr;
-	ret = get_user_pages_unlocked(current, mm, start,
-				      nr_pages - nr, write, 0, pages);
+	ret = get_user_pages_unlocked(start, nr_pages - nr, write, 0, pages);
 	/* Have to be a bit careful with return values */
 	if (nr > 0)
 		ret = (ret < 0) ? nr : ret + nr;

arch/sh/mm/gup.c

+1 -1

@@ -257,7 +257,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages, int write,
 	start += nr << PAGE_SHIFT;
 	pages += nr;
 
-	ret = get_user_pages_unlocked(current, mm, start,
+	ret = get_user_pages_unlocked(start,
 			(end - start) >> PAGE_SHIFT, write, 0, pages);
 
 	/* Have to be a bit careful with return values */

arch/sparc/mm/gup.c

+1 -1

@@ -237,7 +237,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages, int write,
 	start += nr << PAGE_SHIFT;
 	pages += nr;
 
-	ret = get_user_pages_unlocked(current, mm, start,
+	ret = get_user_pages_unlocked(start,
 			(end - start) >> PAGE_SHIFT, write, 0, pages);
 
 	/* Have to be a bit careful with return values */

arch/um/include/asm/mmu_context.h

+14

@@ -27,6 +27,20 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm,
 				     struct vm_area_struct *vma)
 {
 }
+
+static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
+		bool write, bool execute, bool foreign)
+{
+	/* by default, allow everything */
+	return true;
+}
+
+static inline bool arch_pte_access_permitted(pte_t pte, bool write)
+{
+	/* by default, allow everything */
+	return true;
+}
+
 /*
  * end asm-generic/mm_hooks.h functions
  */

arch/unicore32/include/asm/mmu_context.h

+12

@@ -97,4 +97,16 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm,
 {
 }
 
+static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
+		bool write, bool foreign)
+{
+	/* by default, allow everything */
+	return true;
+}
+
+static inline bool arch_pte_access_permitted(pte_t pte, bool write)
+{
+	/* by default, allow everything */
+	return true;
+}
 #endif

arch/x86/Kconfig

+16

@@ -156,6 +156,8 @@ config X86
 	select X86_DEV_DMA_OPS			if X86_64
 	select X86_FEATURE_NAMES		if PROC_FS
 	select HAVE_STACK_VALIDATION		if X86_64
+	select ARCH_USES_HIGH_VMA_FLAGS		if X86_INTEL_MEMORY_PROTECTION_KEYS
+	select ARCH_HAS_PKEYS			if X86_INTEL_MEMORY_PROTECTION_KEYS
 
 config INSTRUCTION_DECODER
 	def_bool y
@@ -1719,6 +1721,20 @@ config X86_INTEL_MPX
 
 	  If unsure, say N.
 
+config X86_INTEL_MEMORY_PROTECTION_KEYS
+	prompt "Intel Memory Protection Keys"
+	def_bool y
+	# Note: only available in 64-bit mode
+	depends on CPU_SUP_INTEL && X86_64
+	---help---
+	  Memory Protection Keys provides a mechanism for enforcing
+	  page-based protections, but without requiring modification of the
+	  page tables when an application changes protection domains.
+
+	  For details, see Documentation/x86/protection-keys.txt
+
+	  If unsure, say y.
+
 config EFI
 	bool "EFI runtime service support"
 	depends on ACPI

arch/x86/include/asm/cpufeature.h

+35 -20

@@ -26,6 +26,7 @@ enum cpuid_leafs
 	CPUID_8000_0008_EBX,
 	CPUID_6_EAX,
 	CPUID_8000_000A_EDX,
+	CPUID_7_ECX,
 };
 
 #ifdef CONFIG_X86_FEATURE_NAMES
@@ -48,28 +49,42 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
 	 test_bit(bit, (unsigned long *)((c)->x86_capability))
 
 #define REQUIRED_MASK_BIT_SET(bit)					\
-	 ( (((bit)>>5)==0 && (1UL<<((bit)&31) & REQUIRED_MASK0)) ||	\
-	   (((bit)>>5)==1 && (1UL<<((bit)&31) & REQUIRED_MASK1)) ||	\
-	   (((bit)>>5)==2 && (1UL<<((bit)&31) & REQUIRED_MASK2)) ||	\
-	   (((bit)>>5)==3 && (1UL<<((bit)&31) & REQUIRED_MASK3)) ||	\
-	   (((bit)>>5)==4 && (1UL<<((bit)&31) & REQUIRED_MASK4)) ||	\
-	   (((bit)>>5)==5 && (1UL<<((bit)&31) & REQUIRED_MASK5)) ||	\
-	   (((bit)>>5)==6 && (1UL<<((bit)&31) & REQUIRED_MASK6)) ||	\
-	   (((bit)>>5)==7 && (1UL<<((bit)&31) & REQUIRED_MASK7)) ||	\
-	   (((bit)>>5)==8 && (1UL<<((bit)&31) & REQUIRED_MASK8)) ||	\
-	   (((bit)>>5)==9 && (1UL<<((bit)&31) & REQUIRED_MASK9)) )
+	 ( (((bit)>>5)==0 && (1UL<<((bit)&31) & REQUIRED_MASK0 )) ||	\
+	   (((bit)>>5)==1 && (1UL<<((bit)&31) & REQUIRED_MASK1 )) ||	\
+	   (((bit)>>5)==2 && (1UL<<((bit)&31) & REQUIRED_MASK2 )) ||	\
+	   (((bit)>>5)==3 && (1UL<<((bit)&31) & REQUIRED_MASK3 )) ||	\
+	   (((bit)>>5)==4 && (1UL<<((bit)&31) & REQUIRED_MASK4 )) ||	\
+	   (((bit)>>5)==5 && (1UL<<((bit)&31) & REQUIRED_MASK5 )) ||	\
+	   (((bit)>>5)==6 && (1UL<<((bit)&31) & REQUIRED_MASK6 )) ||	\
+	   (((bit)>>5)==7 && (1UL<<((bit)&31) & REQUIRED_MASK7 )) ||	\
+	   (((bit)>>5)==8 && (1UL<<((bit)&31) & REQUIRED_MASK8 )) ||	\
+	   (((bit)>>5)==9 && (1UL<<((bit)&31) & REQUIRED_MASK9 )) ||	\
+	   (((bit)>>5)==10 && (1UL<<((bit)&31) & REQUIRED_MASK10)) ||	\
+	   (((bit)>>5)==11 && (1UL<<((bit)&31) & REQUIRED_MASK11)) ||	\
+	   (((bit)>>5)==12 && (1UL<<((bit)&31) & REQUIRED_MASK12)) ||	\
+	   (((bit)>>5)==13 && (1UL<<((bit)&31) & REQUIRED_MASK13)) ||	\
+	   (((bit)>>5)==13 && (1UL<<((bit)&31) & REQUIRED_MASK14)) ||	\
+	   (((bit)>>5)==13 && (1UL<<((bit)&31) & REQUIRED_MASK15)) ||	\
+	   (((bit)>>5)==14 && (1UL<<((bit)&31) & REQUIRED_MASK16)) )
 
 #define DISABLED_MASK_BIT_SET(bit)					\
-	 ( (((bit)>>5)==0 && (1UL<<((bit)&31) & DISABLED_MASK0)) ||	\
-	   (((bit)>>5)==1 && (1UL<<((bit)&31) & DISABLED_MASK1)) ||	\
-	   (((bit)>>5)==2 && (1UL<<((bit)&31) & DISABLED_MASK2)) ||	\
-	   (((bit)>>5)==3 && (1UL<<((bit)&31) & DISABLED_MASK3)) ||	\
-	   (((bit)>>5)==4 && (1UL<<((bit)&31) & DISABLED_MASK4)) ||	\
-	   (((bit)>>5)==5 && (1UL<<((bit)&31) & DISABLED_MASK5)) ||	\
-	   (((bit)>>5)==6 && (1UL<<((bit)&31) & DISABLED_MASK6)) ||	\
-	   (((bit)>>5)==7 && (1UL<<((bit)&31) & DISABLED_MASK7)) ||	\
-	   (((bit)>>5)==8 && (1UL<<((bit)&31) & DISABLED_MASK8)) ||	\
-	   (((bit)>>5)==9 && (1UL<<((bit)&31) & DISABLED_MASK9)) )
+	 ( (((bit)>>5)==0 && (1UL<<((bit)&31) & DISABLED_MASK0 )) ||	\
+	   (((bit)>>5)==1 && (1UL<<((bit)&31) & DISABLED_MASK1 )) ||	\
+	   (((bit)>>5)==2 && (1UL<<((bit)&31) & DISABLED_MASK2 )) ||	\
+	   (((bit)>>5)==3 && (1UL<<((bit)&31) & DISABLED_MASK3 )) ||	\
+	   (((bit)>>5)==4 && (1UL<<((bit)&31) & DISABLED_MASK4 )) ||	\
+	   (((bit)>>5)==5 && (1UL<<((bit)&31) & DISABLED_MASK5 )) ||	\
+	   (((bit)>>5)==6 && (1UL<<((bit)&31) & DISABLED_MASK6 )) ||	\
+	   (((bit)>>5)==7 && (1UL<<((bit)&31) & DISABLED_MASK7 )) ||	\
+	   (((bit)>>5)==8 && (1UL<<((bit)&31) & DISABLED_MASK8 )) ||	\
+	   (((bit)>>5)==9 && (1UL<<((bit)&31) & DISABLED_MASK9 )) ||	\
+	   (((bit)>>5)==10 && (1UL<<((bit)&31) & DISABLED_MASK10)) ||	\
+	   (((bit)>>5)==11 && (1UL<<((bit)&31) & DISABLED_MASK11)) ||	\
+	   (((bit)>>5)==12 && (1UL<<((bit)&31) & DISABLED_MASK12)) ||	\
+	   (((bit)>>5)==13 && (1UL<<((bit)&31) & DISABLED_MASK13)) ||	\
+	   (((bit)>>5)==13 && (1UL<<((bit)&31) & DISABLED_MASK14)) ||	\
+	   (((bit)>>5)==13 && (1UL<<((bit)&31) & DISABLED_MASK15)) ||	\
+	   (((bit)>>5)==14 && (1UL<<((bit)&31) & DISABLED_MASK16)) )
 
 #define cpu_has(c, bit)							\
	(__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 :	\

arch/x86/include/asm/cpufeatures.h

+5 -1

@@ -12,7 +12,7 @@
 /*
  * Defines x86 CPU feature bits
  */
-#define NCAPINTS	16	/* N 32-bit words worth of info */
+#define NCAPINTS	17	/* N 32-bit words worth of info */
 #define NBUGINTS	1	/* N 32-bit bug flags */
 
 /*
@@ -274,6 +274,10 @@
 #define X86_FEATURE_PFTHRESHOLD (15*32+12) /* pause filter threshold */
 #define X86_FEATURE_AVIC	(15*32+13) /* Virtual Interrupt Controller */
 
+/* Intel-defined CPU features, CPUID level 0x00000007:0 (ecx), word 16 */
+#define X86_FEATURE_PKU		(16*32+ 3) /* Protection Keys for Userspace */
+#define X86_FEATURE_OSPKE	(16*32+ 4) /* OS Protection Keys Enable */
+
 /*
  * BUG word(s)
  */

arch/x86/include/asm/disabled-features.h

+15

@@ -28,6 +28,14 @@
 # define DISABLE_CENTAUR_MCR	0
 #endif /* CONFIG_X86_64 */
 
+#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+# define DISABLE_PKU		(1<<(X86_FEATURE_PKU))
+# define DISABLE_OSPKE		(1<<(X86_FEATURE_OSPKE))
+#else
+# define DISABLE_PKU		0
+# define DISABLE_OSPKE		0
+#endif /* CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS */
+
 /*
  * Make sure to add features to the correct mask
  */
@@ -41,5 +49,12 @@
 #define DISABLED_MASK7	0
 #define DISABLED_MASK8	0
 #define DISABLED_MASK9	(DISABLE_MPX)
+#define DISABLED_MASK10	0
+#define DISABLED_MASK11	0
+#define DISABLED_MASK12	0
+#define DISABLED_MASK13	0
+#define DISABLED_MASK14	0
+#define DISABLED_MASK15	0
+#define DISABLED_MASK16	(DISABLE_PKU|DISABLE_OSPKE)
 
 #endif /* _ASM_X86_DISABLED_FEATURES_H */
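The practical effect of DISABLED_MASK16 is that pkeys feature tests compile away when the Kconfig option is off. A hedged sketch of a use site (the callee name is illustrative; cf. the "Actually enable Memory Protection Keys in the CPU" commit in the list above):

    /* With CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS=n, X86_FEATURE_PKU
     * is covered by DISABLED_MASK16, so the DISABLED_MASK_BIT_SET()
     * logic in cpufeature.h folds this check to a constant 0 and the
     * guarded call is dropped at compile time. */
    if (cpu_feature_enabled(X86_FEATURE_PKU))
    	setup_pku(c);	/* illustrative callee name */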

arch/x86/include/asm/fpu/internal.h

+2

@@ -25,6 +25,8 @@
 extern void fpu__activate_curr(struct fpu *fpu);
 extern void fpu__activate_fpstate_read(struct fpu *fpu);
 extern void fpu__activate_fpstate_write(struct fpu *fpu);
+extern void fpu__current_fpstate_write_begin(void);
+extern void fpu__current_fpstate_write_end(void);
 extern void fpu__save(struct fpu *fpu);
 extern void fpu__restore(struct fpu *fpu);
 extern int fpu__restore_sig(void __user *buf, int ia32_frame);
