# Hypercore 10

NOTE: This is the _ALPHA_ version of the upcoming [Hypercore](https://github.com/hypercore-protocol/hypercore) 10 protocol upgrade.

Features all the power of Hypercore, combined with:

* Multiwriter support
* Fork recovery
* Promises
* Simplifications and performance/scaling improvements
* Internal oplog design

## Install

Install from NPM using the `next` tag

```sh
npm install hypercore@next
```

## API

#### `const core = new Hypercore(storage, [key], [options])`

Make a new Hypercore instance.

`storage` should be set to a directory where you want to store the data and core metadata.

``` js
const core = new Hypercore('./directory') // store data in ./directory
```

Alternatively you can pass a function that is called with every filename Hypercore needs and returns your own [abstract-random-access](https://github.com/random-access-storage/abstract-random-access) instance to store the data in.

``` js
const ram = require('random-access-memory')
const core = new Hypercore((filename) => {
  // filename will be one of: data, bitfield, tree, signatures, key, secret_key
  // the data file will contain all your data concatenated.

  // just store all files in ram by returning a random-access-memory instance
  return ram()
})
```

By default Hypercore uses [random-access-file](https://github.com/random-access-storage/random-access-file). Passing a function is also useful if you want to store specific files in other directories.

Hypercore will produce the following files:

* `oplog` - The internal truncating journal/oplog that tracks mutations, the public key and other metadata.
* `tree` - The Merkle Tree file.
* `bitfield` - The bitfield of which data blocks this core has.
* `data` - The raw data of each block.

Note that `tree`, `data`, and `bitfield` are normally heavily sparse files.

`key` can be set to a Hypercore public key. If you do not set this, the public key will be loaded from storage. If no key exists a new key pair will be generated.
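
If you already know a core's public key, for example one shared by a peer, you can pass it as the second argument to open that core instead of generating a new key pair. A minimal sketch, assuming `key` is a 32-byte public key buffer obtained out of band and `./clone-directory` is just an example path:

```js
// open an existing core identified by its public key
const clone = new Hypercore('./clone-directory', key, {
  valueEncoding: 'utf-8' // interpret blocks as strings instead of raw buffers
})

await clone.ready()
console.log('opened core', clone.key.toString('hex'))
```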

`options` include:

``` js
{
  createIfMissing: true, // create a new Hypercore key pair if none was present in storage
  overwrite: false, // overwrite any old Hypercore that might already exist
  sparse: true, // enable sparse mode, counting unavailable blocks towards core.length and core.byteLength
  valueEncoding: 'json' | 'utf-8' | 'binary', // defaults to binary
  encodeBatch: batch => { ... }, // optionally apply an encoding to complete batches
  keyPair: kp, // optionally pass the public key and secret key as a key pair
  encryptionKey: k, // optionally pass an encryption key to enable block encryption
  onwait: () => {} // hook that is called if gets are waiting for download
}
```

You can also set `valueEncoding` to any [abstract-encoding](https://github.com/mafintosh/abstract-encoding) or [compact-encoding](https://github.com/compact-encoding) instance.

The `valueEncoding` is applied to individual blocks, even if you append batches. If you want to control encoding at the batch level, you can use the `encodeBatch` option, which is a function that takes a batch and returns a binary-encoded batch. If you provide a custom `valueEncoding`, it will not be applied prior to `encodeBatch`.
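
As a hedged sketch of the codec option, a block codec from [compact-encoding](https://github.com/compact-encoding) can be plugged in directly as the `valueEncoding`. This assumes the `compact-encoding` package is installed, uses its bundled `json` codec, and passes `null` for the optional key:

```js
const c = require('compact-encoding')

// every block appended to this core is encoded/decoded with the codec
const core = new Hypercore('./directory', null, { valueEncoding: c.json })

await core.append({ hello: 'world' }) // stored as one compactly encoded block
console.log(await core.get(0))        // -> { hello: 'world' }
```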

#### `const { length, byteLength } = await core.append(block)`

Append a block of data (or an array of blocks) to the core.
Returns the new length and byte length of the core.
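
For example, a minimal sketch of a single append and a batch append:

```js
// append a single block
await core.append(Buffer.from('hello'))

// append a batch of blocks in one call
const { length, byteLength } = await core.append([
  Buffer.from('world'),
  Buffer.from('!')
])

console.log(length, byteLength) // new block count and total byte size
```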

#### `const block = await core.get(index, [options])`

Get a block of data.
If the data is not available locally this method will prioritize and wait for the data to be downloaded.

Options include:

``` js
{
  wait: true, // wait for block to be downloaded
  onwait: () => {}, // hook that is called if the get is waiting for download
  timeout: 0, // wait at max some milliseconds (0 means no timeout)
  valueEncoding: 'json' | 'utf-8' | 'binary' // defaults to the core's valueEncoding
}
```
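
For example, a minimal sketch of the options above (assumes the core has, or can download, a block at index 42):

```js
// resolves as soon as block 42 is available, downloading it if needed
const block = await core.get(42)

// only use locally available data, do not wait for a download
const local = await core.get(42, { wait: false })

// decode this one block as utf-8 regardless of the core's valueEncoding
const text = await core.get(42, { valueEncoding: 'utf-8' })
```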

#### `await core.truncate(newLength, [forkId])`

Truncate the core to a smaller length.

By default this will increment the fork id of the core by `1`, but you can set the fork id you prefer with the option.
Note that the fork id should be monotonically increasing.
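
For example, a minimal sketch that drops the last block (assumes the core currently has at least one block):

```js
console.log(core.length, core.fork) // e.g. 10 0

// drop the last block and bump the fork id
await core.truncate(core.length - 1)

console.log(core.length, core.fork) // e.g. 9 1
```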

#### `const hash = await core.treeHash([length])`

Get the Merkle Tree hash of the core at a given length, defaulting to the current length of the core.
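
For example, a minimal sketch comparing the hash of the full core with the hash of a shorter prefix (assumes the core has at least 10 blocks):

```js
// hash covering everything currently in the core
const fullHash = await core.treeHash()

// hash covering only the first 10 blocks
const partialHash = await core.treeHash(10)
```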

#### `const stream = core.createReadStream([options])`

Make a read stream. Options include:

``` js
{
  start: 0,
  end: core.length,
  live: false,
  snapshot: true // auto set end to core.length on open or update it on every read
}
```
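
For example, a sketch that reads a bounded slice of the core, assuming the returned stream is a standard Node.js readable and therefore supports async iteration:

```js
// iterate the first five blocks of the core
for await (const block of core.createReadStream({ start: 0, end: 5 })) {
  console.log('block', block)
}
```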

#### `const range = core.download([range])`

Download a range of data.

You can await until the range has been fully downloaded by doing:

```js
await range.downloaded()
```

A range can have the following properties:

``` js
{
  start: startIndex,
  end: nonInclusiveEndIndex,
  blocks: [index1, index2, ...],
  linear: false // download range linearly and not randomly
}
```

To download the full core continuously (often referred to as non-sparse mode) do:

``` js
// Note that this will never be considered downloaded as the range
// will keep waiting for new blocks to be appended.
core.download({ start: 0, end: -1 })
```

To download a discrete set of blocks, pass a list of indices:

```js
core.download({ blocks: [4, 9, 7] })
```

To cancel downloading a range, simply destroy the range instance:

``` js
// will stop downloading now
range.destroy()
```

#### `const [index, relativeOffset] = await core.seek(byteOffset)`

Seek to a byte offset.

Returns `[index, relativeOffset]`, where `index` is the data block the `byteOffset` is contained in and `relativeOffset` is
the relative byte offset in the data block.
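
For example, a minimal sketch that locates a byte position (assumes the core holds at least 1 MB of data and uses the default binary valueEncoding):

```js
// find which block contains byte 1000000 and where inside that block it starts
const [index, relativeOffset] = await core.seek(1000000)

const block = await core.get(index)
console.log('byte 1000000 is', block[relativeOffset])
```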

#### `const updated = await core.update()`

Wait for the core to try and find a signed update to its length.
Does not download any data from peers except for a proof of the new core length.

``` js
const updated = await core.update()
console.log('core was updated?', updated, 'length is', core.length)
```

#### `const info = await core.info()`

Get information about this core, such as its total size in bytes.

The object will look like this:

```js
Info {
  key: Buffer(...),
  length: 18,
  contiguousLength: 16,
  byteLength: 742,
  fork: 0,
  padding: 8
}
```

#### `await core.close()`

Fully close this core.

#### `core.on('close')`

Emitted when the core has been fully closed.

#### `await core.ready()`

Wait for the core to fully open.

After this has resolved, `core.length` and other properties have been set.

In general you do NOT need to wait for `ready`, unless checking a synchronous property,
as all internals await this themselves.

#### `core.on('ready')`

Emitted after the core has initially opened all its internal state.

#### `core.writable`

Can we append to this core?

Populated after `ready` has been emitted. Will be `false` before the event.

#### `core.readable`

Can we read from this core? After closing the core this will be `false`.

Populated after `ready` has been emitted. Will be `false` before the event.

#### `core.key`

Buffer containing the public key identifying this core.

Populated after `ready` has been emitted. Will be `null` before the event.

#### `core.keyPair`

Object containing buffers of the core's public and secret key.

Populated after `ready` has been emitted. Will be `null` before the event.

#### `core.discoveryKey`

Buffer containing a key derived from the core's public key.
In contrast to `core.key`, this key does not allow you to verify the data, but it can be used to announce or look for peers that are sharing the same core without leaking the core key.

Populated after `ready` has been emitted. Will be `null` before the event.

#### `core.encryptionKey`

Buffer containing the optional block encryption key of this core. Will be `null` unless block encryption is enabled.

#### `core.length`

How many blocks of data are available on this core? If `sparse: false`, this will equal `core.contiguousLength`.

Populated after `ready` has been emitted. Will be `0` before the event.

#### `core.contiguousLength`

How many blocks are contiguously available starting from the first block of this core?

Populated after `ready` has been emitted. Will be `0` before the event.

#### `core.contiguousByteLength`

How much data is contiguously available starting from the first block of this core?

Populated after `ready` has been emitted. Will be `0` before the event.

#### `core.fork`

What is the current fork id of this core?

Populated after `ready` has been emitted. Will be `0` before the event.

#### `core.padding`

How much padding is applied to each block of this core? Will be `0` unless block encryption is enabled.

#### `const stream = core.replicate(isInitiatorOrReplicationStream)`

Create a replication stream. You should pipe this to another Hypercore instance.

The `isInitiator` argument is a boolean indicating whether you are the initiator of the connection (i.e. the client)
or the passive peer (i.e. the server).

If you are using a P2P swarm like [Hyperswarm](https://github.com/hyperswarm/hyperswarm) you can know this by checking if the swarm connection is a client socket or a server socket. In Hyperswarm you can check that using the [client property on the peer details object](https://github.com/hyperswarm/hyperswarm#swarmonconnection-socket-details--).

If you want to multiplex the replication over an existing Hypercore replication stream you can pass
another stream instance instead of the `isInitiator` boolean.

``` js
// assuming we have two cores, localCore + remoteCore, sharing the same key
// on a server
const net = require('net')
const server = net.createServer(function (socket) {
  socket.pipe(remoteCore.replicate(false)).pipe(socket)
})

// on a client
const socket = net.connect(...)
socket.pipe(localCore.replicate(true)).pipe(socket)
```

#### `const done = core.findingPeers()`

Create a hook that tells Hypercore you are finding peers for this core in the background. Call `done` when your current discovery iteration is done.
If you're using Hyperswarm, you'd normally call this after a `swarm.flush()` finishes.

This allows `core.update` to wait for either the `findingPeers` hook to finish or one peer to appear before deciding whether it should wait for a Merkle tree update before returning.
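
A hedged sketch of that pattern, assuming Hyperswarm is used for discovery, that `swarm.flush()` returns a promise, and that the connection handler receives a peer details object with a `client` property as described in the replicate section above:

```js
const Hyperswarm = require('hyperswarm')

const swarm = new Hyperswarm()

swarm.on('connection', (socket, details) => {
  // details.client tells us whether we initiated this connection
  socket.pipe(core.replicate(details.client)).pipe(socket)
})

swarm.join(core.discoveryKey)

// tell the core we are looking for peers in the background ...
const done = core.findingPeers()
// ... and signal completion once the first discovery round has flushed
swarm.flush().then(done, done)

// update() can now wait for that discovery round before resolving
console.log('updated?', await core.update())
```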

#### `core.on('append')`

Emitted when the core has been appended to (i.e. has a new length / byteLength), either locally or remotely.

#### `core.on('truncate', ancestors, forkId)`

Emitted when the core has been truncated, either locally or remotely.
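
For example, a minimal sketch wiring up both events:

```js
core.on('append', () => {
  console.log('core length is now', core.length)
})

core.on('truncate', (ancestors, forkId) => {
  console.log('core was truncated, new fork id is', forkId)
})
```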