Commit 479b683

feat: candid subtype check (#3171)
Builds on (in reverse order):

* #3170 feat: optimized candid subtype check (adding a global type table of the types involved in candid subtype checks)
* #3151 feat: candid subtype check during deserialization (first integration of subtype checks with deserialization)
* #3115 feat: implementation of rts candid subtyping check (implementation of (unextended) type-table-based candid subtype checks)

Since this PR includes more tests and fixes to the previous PRs, it's probably best to just review this one as a diff against master.

Over #3170, this PR:

- Introduces a stack-allocated memo table, allocated on entry to deserialization and shared among all recursive calls. The `extended` boolean flag is replaced by a possibly null memo table (`bit_rel_opt`); the table is `null` for extended deserialization of stable values, for which subtype checks on references are simply omitted since they are unnecessary.
- Generalises the subtype check and bitrel cache to support sharing across multiple calls and caching of both positive and *negative* subtype test results:
  - Adds one bit per pair of types in the subtype `cache` to record the true/false outcome, separately from plain membership in the cache.
  - Adjusts each exit point that returns false to invalidate the positive subtyping assumption already in the cache (thus recording the negative result in the cache).

  Unfortunately, caching negative results doubles the space required to store the cache: we now need 2 bits, not 1, for every possible pair of type indices. Subtyping between two type tables with 128 entries each will consume 8K (= 2 * 2 * 128 * 128 / 8 bytes) of stack space. Not great, but not terrible either. (See the sketch below for the layout arithmetic.)

  The first, `visited` bit (bit 0 per (t1, t2) pair) is stored in unnegated form. The second, `related` bit (bit 1 per (t1, t2) pair) is stored in negated form, so that assuming the relation holds when a pair is first visited is actually a no-op (since the matrix is initialized with zero bits).
- Also fixes a lurking bug where the value was not skipped when the subtype check fails (meaning the previous PRs were buggy).

Task list:

- [x] create the cache once on entry to deserialize and share it between calls to idl_sub
- [x] avoid explicitly assuming true by defaulting to full relations on entry (by inverting the true and false bits)
- [ ] ~only produce global type table entries in unextended deserialization.~ Not worth the effort.
- [x] add basic test for recursive types
- [x] overflow check on Stack.dynamic_alloc
- [ ] idl_sub on future types.
- [x] fix broken re-initialization of bitrel on every subtype check
- [x] integrate with changes due to to_candid/from_candid #3155
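To make the cache layout concrete, here is a minimal, self-contained sketch of the space arithmetic and the negated `related` encoding described above. It is a sketch only: `BITS_PER_PAIR` and `cache_words` are hypothetical stand-ins mirroring `BitRel::words` from `rts/motoko-rts/src/bitrel.rs` below.

```rust
// Sketch of the cache layout: for each pair of type-table indices, the cache
// keeps 2 bits (visited, negated related), in both the co- and contravariant
// direction, hence the factor 2 * BITS_PER_PAIR.
const BITS_PER_PAIR: u32 = 2; // bit 0: visited; bit 1: NOT related (negated)

/// Hypothetical stand-in for `BitRel::words`: 32-bit words needed for
/// type tables with `size1` and `size2` entries, rounding up.
fn cache_words(size1: u32, size2: u32) -> u32 {
    (2 * size1 * size2 * BITS_PER_PAIR + 31) / 32
}

fn main() {
    // Two 128-entry type tables: 2 * 2 * 128 * 128 bits = 65536 bits
    // = 8192 bytes, i.e. the 8K of stack quoted above.
    let words = cache_words(128, 128);
    assert_eq!(words, 2048); // 2048 32-bit words
    assert_eq!(words * 4, 8192); // 8 KiB

    // Because `related` is stored negated, a freshly zeroed cache word means
    // "unvisited and assumed related", so the initial assumption is free.
    let fresh_word: u32 = 0;
    assert!((fresh_word & 0b01) == 0); // not yet visited
    assert!((fresh_word & 0b10) == 0); // negated bit clear => related
}
```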
1 parent 6696467 · commit 479b683

37 files changed: +2442 −75 lines

Changelog.md

Lines changed: 10 additions & 0 deletions
```diff
@@ -2,6 +2,16 @@
 
 * motoko (`moc`)
 
+  * BREAKING CHANGE
+
+    Motoko now implements Candid 1.4 (dfinity/candid#311).
+
+    In particular, when deserializing an actor or function reference,
+    Motoko will now first check that the type of the deserialized reference
+    is a subtype of the expected type and act accordingly.
+
+    Very few users should be affected by this change in behaviour.
+
   * BREAKING CHANGE
 
     On the IC, the act of making a call to a canister function can fail, so that the call cannot (and will not be) performed.
```

nix/sources.json

Lines changed: 3 additions & 3 deletions
```diff
@@ -6,10 +6,10 @@
         "homepage": "",
         "owner": "dfinity",
         "repo": "candid",
-        "rev": "a555d77704d691bb8f34e21a049d44ba0acee3f8",
-        "sha256": "0vn171lcadpznrl5nq2mws2zjjqj9jxyvndb2is3dixbjqyvjssx",
+        "rev": "fa27eef96c96c2f774a479622996b42c7ae6c1bd",
+        "sha256": "18zhn0iyakq1212jizc406v9x275nivdnnwx5h2130ci10d7f1ah",
         "type": "tarball",
-        "url": "https://github.com/dfinity/candid/archive/a555d77704d691bb8f34e21a049d44ba0acee3f8.tar.gz",
+        "url": "https://github.com/dfinity/candid/archive/fa27eef96c96c2f774a479622996b42c7ae6c1bd.tar.gz",
         "url_template": "https://github.com/<owner>/<repo>/archive/<rev>.tar.gz"
     },
     "esm": {
```

rts/motoko-rts-tests/src/bitrel.rs

Lines changed: 57 additions & 0 deletions
```rust
use motoko_rts::bitrel::BitRel;
use motoko_rts::types::{Value, Words};

pub unsafe fn test() {
    println!("Testing bitrel ...");

    const K: u32 = 128;

    const N: usize = (2 * K * K * 2 / usize::BITS) as usize;

    let mut cache: [u32; N] = [0xFFFFFFF; N];

    assert_eq!(usize::BITS, 32);
    for size1 in 0..K {
        for size2 in 0..K {
            let w = BitRel::words(size1, size2);
            let bitrel = BitRel {
                ptr: &mut cache[0],
                end: &mut cache[w as usize],
                size1: size1,
                size2: size2,
            };
            bitrel.init();
            for i in 0..size1 {
                for j in 0..size2 {
                    // initially unvisited
                    assert!(!bitrel.visited(true, i, j)); // co
                    assert!(!bitrel.visited(false, j, i)); // contra

                    // initially related
                    assert!(bitrel.related(true, i, j)); // co
                    assert!(bitrel.related(false, j, i)); // contra

                    // test visiting
                    // co
                    bitrel.visit(true, i, j);
                    assert!(bitrel.visited(true, i, j));
                    // contra
                    bitrel.visit(false, j, i);
                    assert!(bitrel.visited(false, j, i));

                    // test refutation
                    // co
                    bitrel.assume(true, i, j);
                    assert!(bitrel.related(true, i, j));
                    bitrel.disprove(true, i, j);
                    assert!(!bitrel.related(true, i, j));
                    // contra
                    bitrel.assume(false, j, i);
                    assert!(bitrel.related(false, j, i));
                    bitrel.disprove(false, j, i);
                    assert!(!bitrel.related(false, j, i));
                }
            }
        }
    }
}
```

rts/motoko-rts-tests/src/main.rs

Lines changed: 2 additions & 0 deletions
```diff
@@ -2,6 +2,7 @@
 
 mod bigint;
 mod bitmap;
+mod bitrel;
 mod continuation_table;
 mod crc32;
 mod gc;
@@ -24,6 +25,7 @@ fn main() {
     unsafe {
         bigint::test();
         bitmap::test();
+        bitrel::test();
         continuation_table::test();
         crc32::test();
         gc::test();
```

rts/motoko-rts/src/bitrel.rs

Lines changed: 92 additions & 0 deletions
```rust
//! This module implements a simple subtype cache used by the compiler (in generated code)

use crate::constants::WORD_SIZE;
use crate::idl_trap_with;
use crate::mem_utils::memzero;
use crate::types::Words;

const BITS: u32 = 2;

#[repr(packed)]
pub struct BitRel {
    /// Pointer into the bit set
    pub ptr: *mut u32,
    /// Pointer to the end of the bit set;
    /// must allow for at least 2 * size1 * size2 * BITS bits
    pub end: *mut u32,
    pub size1: u32,
    pub size2: u32,
}

impl BitRel {
    pub fn words(size1: u32, size2: u32) -> u32 {
        return ((2 * size1 * size2 * BITS) + (usize::BITS - 1)) / usize::BITS;
    }

    pub unsafe fn init(&self) {
        if (self.end as usize) < (self.ptr as usize) {
            idl_trap_with("BitRel invalid fields");
        };

        let bytes = ((self.end as usize) - (self.ptr as usize)) as u32;
        if bytes != BitRel::words(self.size1, self.size2) * WORD_SIZE {
            idl_trap_with("BitRel missized");
        };
        memzero(self.ptr as usize, Words(bytes / WORD_SIZE));
    }

    unsafe fn locate_ptr_bit(&self, p: bool, i_j: u32, j_i: u32, bit: u32) -> (*mut u32, u32) {
        let size1 = self.size1;
        let size2 = self.size2;
        let (base, i, j) = if p { (0, i_j, j_i) } else { (size1, j_i, i_j) };
        debug_assert!(i < size1);
        debug_assert!(j < size2);
        debug_assert!(bit < BITS);
        let k = ((base + i) * size2 + j) * BITS + bit;
        let word = (k / usize::BITS) as usize;
        let bit = (k % usize::BITS) as u32;
        let ptr = self.ptr.add(word);
        if ptr > self.end {
            idl_trap_with("BitRel indices out of bounds");
        };
        return (ptr, bit);
    }

    unsafe fn set(&self, p: bool, i_j: u32, j_i: u32, bit: u32, v: bool) {
        let (ptr, bit) = self.locate_ptr_bit(p, i_j, j_i, bit);
        if v {
            *ptr = *ptr | (1 << bit);
        } else {
            *ptr = *ptr & !(1 << bit);
        }
    }

    unsafe fn get(&self, p: bool, i_j: u32, j_i: u32, bit: u32) -> bool {
        let (ptr, bit) = self.locate_ptr_bit(p, i_j, j_i, bit);
        let mask = 1 << bit;
        return *ptr & mask == mask;
    }

    pub unsafe fn visited(&self, p: bool, i_j: u32, j_i: u32) -> bool {
        self.get(p, i_j, j_i, 0)
    }

    pub unsafe fn visit(&self, p: bool, i_j: u32, j_i: u32) {
        self.set(p, i_j, j_i, 0, true)
    }

    // NB: the `related` bit is stored in negated form, so that assuming the
    // relation holds on first visit requires no write; in release builds this
    // function is a no-op (the `debug_assert!` compiles away).
    #[allow(dead_code)]
    pub unsafe fn assume(&self, p: bool, i_j: u32, j_i: u32) {
        debug_assert!(!self.get(p, i_j, j_i, 1));
    }

    pub unsafe fn related(&self, p: bool, i_j: u32, j_i: u32) -> bool {
        !self.get(p, i_j, j_i, 1)
    }

    pub unsafe fn disprove(&self, p: bool, i_j: u32, j_i: u32) {
        self.set(p, i_j, j_i, 1, true)
    }
}
```
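For readers new to this style of coinductive caching, the following safe, self-contained sketch shows the protocol a checker like `idl_sub` follows against such a cache. It is illustrative only: `Ty`, `Cache`, and `sub` are hypothetical stand-ins, not the RTS API, and the polarity flag `p` (used for contravariant function positions) is omitted for brevity.

```rust
// Hypothetical illustration of the caching protocol used by the subtype check.
#[derive(Clone, Copy)]
enum Ty {
    Nat,
    Int,
    Opt(usize), // payload is an index into the same type table
}

struct Cache {
    visited: Vec<bool>,     // corresponds to bit 0 in `BitRel`
    not_related: Vec<bool>, // corresponds to bit 1 in `BitRel` (stored negated)
    size2: usize,
}

impl Cache {
    fn new(size1: usize, size2: usize) -> Self {
        Cache {
            visited: vec![false; size1 * size2],
            not_related: vec![false; size1 * size2],
            size2,
        }
    }
    fn idx(&self, i: usize, j: usize) -> usize {
        i * self.size2 + j
    }
}

/// Coinductive subtype check between entry `i` of table `t1` and entry `j` of
/// table `t2`: on a revisit, return the cached answer (related unless
/// disproved); otherwise mark the pair visited (implicitly assuming it is
/// related, a no-op on the negated bit) and recurse, recording a refutation
/// at every exit that returns false.
fn sub(cache: &mut Cache, t1: &[Ty], t2: &[Ty], i: usize, j: usize) -> bool {
    let k = cache.idx(i, j);
    if cache.visited[k] {
        return !cache.not_related[k]; // cached positive or negative result
    }
    cache.visited[k] = true;
    let ok = match (t1[i], t2[j]) {
        (Ty::Nat, Ty::Nat) | (Ty::Int, Ty::Int) => true,
        (Ty::Nat, Ty::Int) => true, // Nat <: Int
        (Ty::Opt(a), Ty::Opt(b)) => sub(cache, t1, t2, a, b),
        _ => false,
    };
    if !ok {
        cache.not_related[k] = true; // "disprove": record the negative result
    }
    ok
}

fn main() {
    // opt Nat <: opt Int, via the table entries at index 1.
    let t1 = [Ty::Opt(1), Ty::Nat];
    let t2 = [Ty::Opt(1), Ty::Int];
    let mut cache = Cache::new(t1.len(), t2.len());
    assert!(sub(&mut cache, &t1, &t2, 0, 0));

    // The cache is keyed by index pairs, so checking the reverse direction
    // needs a fresh cache here (the real BitRel folds both directions into
    // one table via its polarity flag).
    let mut cache2 = Cache::new(t1.len(), t2.len());
    assert!(!sub(&mut cache2, &t2, &t1, 1, 1)); // Int </: Nat

    // A recursive type, mu X. opt X, on both sides: the coinductive
    // assumption on the revisit makes the check terminate with `true`.
    let r1 = [Ty::Opt(0)];
    let r2 = [Ty::Opt(0)];
    let mut cache3 = Cache::new(1, 1);
    assert!(sub(&mut cache3, &r1, &r2, 0, 0));
}
```

Caching the refutation, not just the visit, is what makes it safe to share one table across all recursive deserialization calls: a later query for the same pair gets the correct negative answer instead of the optimistic assumption.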
