
Commit 54690c2 (1 parent: 7c3463d)

doc: fix typos

This commit fixes an amazing number of new typos that I introduced in the
regex 1.9 release. The typos were found and fixed by the typos-cli tool.[1]

[1]: https://crates.io/crates/typos-cli

PR rust-lang#1026


48 files changed: +76, -76 lines changed

CHANGELOG.md

Lines changed: 1 addition & 1 deletion

@@ -187,7 +187,7 @@ More specifically, any ASCII character except for `[0-9A-Za-z<>]` can now be
 escaped. Also, a new routine, `is_escapeable_character`, has been added to
 `regex-syntax` to query whether a character is escapeable or not.
 * [FEATURE #547](https://github.com/rust-lang/regex/issues/547):
-Add `Regex::captures_at`. This filles a hole in the API, but doesn't otherwise
+Add `Regex::captures_at`. This fills a hole in the API, but doesn't otherwise
 introduce any new expressive power.
 * [FEATURE #595](https://github.com/rust-lang/regex/issues/595):
 Capture group names are now Unicode-aware. They can now begin with either a `_`

regex-automata/src/dfa/automaton.rs

Lines changed: 1 addition & 1 deletion

@@ -1074,7 +1074,7 @@ pub unsafe trait Automaton {
 /// // encoding of any Unicode scalar value except for 'a', 'b' or 'c'.
 /// // That translates to a much more complicated DFA, and also
 /// // inhibits the 'accelerator' optimization that we are trying to
-/// // demostrate in this example.
+/// // demonstrate in this example.
 /// .syntax(syntax::Config::new().unicode(false).utf8(false))
 /// .build("[^abc]+a")?;
 ///

regex-automata/src/dfa/dense.rs

Lines changed: 3 additions & 3 deletions

@@ -2109,7 +2109,7 @@ impl<T: AsRef<[u32]>> DFA<T> {
 /// let mut buf = vec![0; original_dfa.write_to_len()];
 /// // This is guaranteed to succeed, because the only serialization error
 /// // that can occur is when the provided buffer is too small. But
-/// // write_to_len guarantees a correct sie.
+/// // write_to_len guarantees a correct size.
 /// let written = original_dfa.write_to_native_endian(&mut buf).unwrap();
 /// // But this is not guaranteed to succeed! In particular,
 /// // deserialization requires proper alignment for &[u32], but our buffer

@@ -3336,7 +3336,7 @@ impl<'a> TransitionTable<&'a [u32]> {
 ///
 /// # Safety
 ///
-/// This routine is not safe because it does not check the valdity of the
+/// This routine is not safe because it does not check the validity of the
 /// transition table itself. In particular, the transition table can be
 /// quite large, so checking its validity can be somewhat expensive. An
 /// invalid transition table is not safe because other code may rely on the

@@ -3929,7 +3929,7 @@ impl<'a> StartTable<&'a [u32]> {
 ///
 /// # Safety
 ///
-/// This routine is not safe because it does not check the valdity of the
+/// This routine is not safe because it does not check the validity of the
 /// starting state IDs themselves. In particular, the number of starting
 /// IDs can be of variable length, so it's possible that checking their
 /// validity cannot be done in constant time. An invalid starting state
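
The first hunk's doc example notes that deserializing a dense DFA requires proper alignment for `&[u32]`, even when the byte buffer has the correct length. The following stand-alone sketch (an illustration of the alignment requirement, not the crate's actual API) shows how such a check can be expressed:

```rust
use std::mem;
use std::slice;

// Returns true when `bytes` starts at an address suitable for viewing
// the buffer as a `&[u32]`. This helper is an illustrative sketch of
// the alignment requirement the dense DFA docs describe.
fn is_u32_aligned(bytes: &[u8]) -> bool {
    bytes.as_ptr() as usize % mem::align_of::<u32>() == 0
}

fn main() {
    // A Vec<u32>'s allocation is aligned for u32, so viewing its bytes
    // from offset 0 is fine, while offset 1 breaks the alignment.
    let words: Vec<u32> = vec![1, 2, 3, 4];
    let bytes: &[u8] = unsafe {
        slice::from_raw_parts(words.as_ptr() as *const u8, words.len() * 4)
    };
    assert!(is_u32_aligned(bytes));
    assert!(!is_u32_aligned(&bytes[1..]));
}
```

This is why a `Vec<u8>` buffer, as in the doc example, is not guaranteed to deserialize successfully: its starting address has no particular alignment.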

regex-automata/src/dfa/determinize.rs

Lines changed: 1 addition & 1 deletion

@@ -539,7 +539,7 @@ impl<'a> Runner<'a> {
 }
 let state = builder.to_state();
 // States use reference counting internally, so we only need to count
-// their memroy usage once.
+// their memory usage once.
 self.memory_usage_state += state.memory_usage();
 self.builder_states.push(state.clone());
 self.cache.insert(state, id);
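
The comment in this hunk leans on the fact that determinizer states are reference counted: pushing a state into the state list and into the cache shares one allocation, so its heap usage is charged only once. A minimal stdlib-only sketch of that accounting (the names and the `Arc<[u8]>` representation are illustrative, not the crate's internals):

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Intern states by value, counting each distinct state's heap usage
// exactly once. Returns (number of distinct states, total bytes charged).
fn intern_states(inputs: &[Vec<u8>]) -> (usize, usize) {
    let mut cache: HashMap<Arc<[u8]>, usize> = HashMap::new();
    let mut states: Vec<Arc<[u8]>> = Vec::new();
    let mut memory_usage = 0;
    for bytes in inputs {
        let state: Arc<[u8]> = bytes.clone().into();
        if cache.contains_key(&state) {
            // Already interned: cloning an Arc shares the allocation,
            // so there is no new memory to account for.
            continue;
        }
        memory_usage += state.len(); // counted once per distinct state
        let id = states.len();
        states.push(Arc::clone(&state));
        cache.insert(state, id);
    }
    (states.len(), memory_usage)
}

fn main() {
    // The repeated [1, 2, 3] state is interned only once.
    let (count, bytes) = intern_states(&[vec![1, 2, 3], vec![4], vec![1, 2, 3]]);
    assert_eq!((count, bytes), (2, 4));
}
```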

regex-automata/src/dfa/minimize.rs

Lines changed: 1 addition & 1 deletion

@@ -152,7 +152,7 @@ impl<'a> Minimizer<'a> {

 // At this point, we now have a minimal partitioning of states, where
 // each partition is an equivalence class of DFA states. Now we need to
-// use this partioning to update the DFA to only contain one state for
+// use this partitioning to update the DFA to only contain one state for
 // each partition.

 // Create a map from DFA state ID to the representative ID of the
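
The comments in this hunk describe the last step of minimization: collapsing each equivalence class to a single representative state. A small illustrative sketch of the "state ID to representative ID" map the comment mentions (choosing the smallest ID as representative is an assumption here, not necessarily the crate's choice):

```rust
use std::collections::HashMap;

// Given a partition of state IDs into equivalence classes, build a map
// from each state to its class representative, which the minimized DFA
// can then use to redirect all transitions.
fn representative_map(partition: &[Vec<u32>]) -> HashMap<u32, u32> {
    let mut map = HashMap::new();
    for class in partition {
        let rep = *class.iter().min().expect("classes are non-empty");
        for &state in class {
            map.insert(state, rep);
        }
    }
    map
}

fn main() {
    // States 1 and 3 are equivalent, as are 0 and 2.
    let map = representative_map(&[vec![1, 3], vec![0, 2]]);
    assert_eq!(map[&3], 1);
    assert_eq!(map[&2], 0);
    assert_eq!(map[&0], 0);
}
```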

regex-automata/src/dfa/mod.rs

Lines changed: 2 additions & 2 deletions

@@ -1,5 +1,5 @@
 /*!
-A module for building and searching with determinstic finite automata (DFAs).
+A module for building and searching with deterministic finite automata (DFAs).

 Like other modules in this crate, DFAs support a rich regex syntax with Unicode
 features. DFAs also have extensive options for configuring the best space vs

@@ -267,7 +267,7 @@ the regexes in this module are almost universally slow to compile, especially
 when they contain large Unicode character classes. For example, on my system,
 compiling `\w{50}` takes about 1 second and almost 15MB of memory! (Compiling
 a sparse regex takes about the same time but only uses about 1.2MB of
-memory.) Conversly, compiling the same regex without Unicode support, e.g.,
+memory.) Conversely, compiling the same regex without Unicode support, e.g.,
 `(?-u)\w{50}`, takes under 1 millisecond and about 15KB of memory. For this
 reason, you should only use Unicode character classes if you absolutely need
 them! (They are enabled by default though.)

regex-automata/src/dfa/regex.rs

Lines changed: 1 addition & 1 deletion

@@ -590,7 +590,7 @@ impl<A: Automaton> Regex<A> {
 ///
 /// The type parameters are as follows:
 ///
-/// * `A` represents the type of the underyling DFA that implements the
+/// * `A` represents the type of the underlying DFA that implements the
 /// [`Automaton`] trait.
 ///
 /// The lifetime parameters are as follows:

regex-automata/src/hybrid/dfa.rs

Lines changed: 4 additions & 4 deletions

@@ -32,7 +32,7 @@ use crate::{
 },
 };

-/// The mininum number of states that a lazy DFA's cache size must support.
+/// The minimum number of states that a lazy DFA's cache size must support.
 ///
 /// This is checked at time of construction to ensure that at least some small
 /// number of states can fit in the given capacity allotment. If we can't fit

@@ -2332,7 +2332,7 @@ impl<'i, 'c> Lazy<'i, 'c> {
 "lazy DFA cache has been cleared {} times, \
 which exceeds the limit of {}, \
 AND its bytes searched per state is less \
-than the configured mininum of {}, \
+than the configured minimum of {}, \
 therefore lazy DFA is giving up \
 (bytes searched since cache clear = {}, \
 number of states = {})",

@@ -2348,7 +2348,7 @@ impl<'i, 'c> Lazy<'i, 'c> {
 "lazy DFA cache has been cleared {} times, \
 which exceeds the limit of {}, \
 AND its bytes searched per state is greater \
-than the configured mininum of {}, \
+than the configured minimum of {}, \
 therefore lazy DFA is continuing! \
 (bytes searched since cache clear = {}, \
 number of states = {})",

@@ -2771,7 +2771,7 @@ enum StateSaver {
 /// is stored in 'Saved' since it may have changed.
 ToSave { id: LazyStateID, state: State },
 /// An ID that of a state that has been persisted through a lazy DFA
-/// cache clearing. The ID recorded here corresonds to an ID that was
+/// cache clearing. The ID recorded here corresponds to an ID that was
 /// once marked as ToSave. The IDs are likely not equivalent even though
 /// the states they point to are.
 Saved(LazyStateID),
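
The two log messages in this hunk describe the lazy DFA's give-up heuristic: after the cache has been cleared more than a configured number of times, the search gives up only if it is also making poor progress, i.e. its bytes searched per newly generated state falls below a configured minimum. A sketch of that decision (all names and thresholds here are hypothetical stand-ins, not the crate's actual configuration API):

```rust
// Decide whether a lazy DFA search should give up, per the heuristic
// described in the log messages: too many cache clears AND too few
// bytes searched per state means the cache is thrashing.
fn should_give_up(
    clear_count: usize,
    clear_limit: usize,
    bytes_searched: usize,
    state_count: usize,
    min_bytes_per_state: usize,
) -> bool {
    if clear_count <= clear_limit {
        return false;
    }
    // max(1) guards against dividing by zero when no states exist yet.
    let bytes_per_state = bytes_searched / state_count.max(1);
    bytes_per_state < min_bytes_per_state
}

fn main() {
    // Cleared 4 times against a limit of 3, and only 100 bytes searched
    // across 50 new states: 2 bytes/state is below the minimum of 10.
    assert!(should_give_up(4, 3, 100, 50, 10));
    // Same clearing behavior, but the search is making good progress,
    // so the lazy DFA keeps going.
    assert!(!should_give_up(4, 3, 10_000, 50, 10));
}
```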

regex-automata/src/hybrid/mod.rs

Lines changed: 1 addition & 1 deletion

@@ -1,5 +1,5 @@
 /*!
-A module for building and searching with lazy determinstic finite automata
+A module for building and searching with lazy deterministic finite automata
 (DFAs).

 Like other modules in this crate, lazy DFAs support a rich regex syntax with

regex-automata/src/hybrid/search.rs

Lines changed: 1 addition & 1 deletion

@@ -188,7 +188,7 @@ fn find_fwd_imp(
 // mentioned above was a pretty big pessimization in some other
 // cases. Namely, it resulted in too much ping-ponging into and out
 // of the loop, which resulted in nearly ~2x regressions in search
-// time when compared to the originaly lazy DFA in the regex crate.
+// time when compared to the originally lazy DFA in the regex crate.
 // So I've removed the second loop unrolling that targets the
 // self-transition case.
 let mut prev_sid = sid;

regex-automata/src/lib.rs

Lines changed: 1 addition & 1 deletion

@@ -330,7 +330,7 @@ at search time and it requires the caller to opt into this.

 There are other ways for regex engines to fail in this crate, but the above
 two should represent the general theme of failures one can find. Dealing
-with these failures is, in part, one the reaponsibilities of the [meta regex
+with these failures is, in part, one the responsibilities of the [meta regex
 engine](meta). Notice, for example, that the meta regex engine exposes an API
 that never returns an error nor panics. It carefully manages all of the ways
 in which the regex engines can fail and either avoids the predictable ones

regex-automata/src/meta/error.rs

Lines changed: 2 additions & 2 deletions

@@ -120,7 +120,7 @@ impl core::fmt::Display for BuildError {
 ///
 /// The first is one where potential quadratic behavior has been detected.
 /// In this case, whatever optimization that led to this behavior should be
-/// stopped, and the next best strategy shouldbe used.
+/// stopped, and the next best strategy should be used.
 ///
 /// The second indicates that the underlying regex engine has failed for some
 /// reason. This usually occurs because either a lazy DFA's cache has become

@@ -194,7 +194,7 @@ impl From<RetryQuadraticError> for RetryError {
 /// Note that this has convenient `From` impls that will automatically
 /// convert a `MatchError` into this error. This works because the meta
 /// regex engine internals guarantee that errors like `HaystackTooLong` and
-/// `UnsupportAnchored` will never occur. The only errors left are `Quit` and
+/// `UnsupportedAnchored` will never occur. The only errors left are `Quit` and
 /// `GaveUp`, which both correspond to this "failure" error.
 #[derive(Debug)]
 pub(crate) struct RetryFailError {

regex-automata/src/meta/regex.rs

Lines changed: 1 addition & 1 deletion

@@ -2277,7 +2277,7 @@ impl<'r, 'h> core::iter::FusedIterator for SplitN<'r, 'h> {}
 /// explicitly separated from the the core regex object (such as a
 /// [`thompson::NFA`](crate::nfa::thompson::NFA)) so that the read-only regex
 /// object can be shared across multiple threads simultaneously without any
-/// synchronization. Conversly, a `Cache` must either be duplicated if using
+/// synchronization. Conversely, a `Cache` must either be duplicated if using
 /// the same `Regex` from multiple threads, or else there must be some kind of
 /// synchronization that guarantees exclusive access while it's in use by one
 /// thread.

regex-automata/src/nfa/thompson/backtrack.rs

Lines changed: 2 additions & 2 deletions

@@ -825,7 +825,7 @@ impl BoundedBacktracker {
 #[inline]
 pub fn max_haystack_len(&self) -> usize {
 // The capacity given in the config is "bytes of heap memory," but the
-// capacity we use here is "number of bits." So conver the capacity in
+// capacity we use here is "number of bits." So convert the capacity in
 // bytes to the capacity in bits.
 let capacity = 8 * self.get_config().get_visited_capacity();
 let blocks = div_ceil(capacity, Visited::BLOCK_SIZE);

@@ -1845,7 +1845,7 @@ impl Visited {
 /// Reset this visited set to work with the given bounded backtracker.
 fn reset(&mut self, re: &BoundedBacktracker) {
 // The capacity given in the config is "bytes of heap memory," but the
-// capacity we use here is "number of bits." So conver the capacity in
+// capacity we use here is "number of bits." So convert the capacity in
 // bytes to the capacity in bits.
 let capacity = 8 * re.get_config().get_visited_capacity();
 let blocks = div_ceil(capacity, Visited::BLOCK_SIZE);
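
Both hunks perform the same conversion: a visited-set capacity configured in bytes of heap memory is used as a bitset, so it is multiplied by 8 and then rounded up to whole blocks. A runnable sketch of that arithmetic, where `BLOCK_SIZE` is an assumed stand-in for the crate's `Visited::BLOCK_SIZE`:

```rust
// One block holds one machine word's worth of "visited" bits; this is
// an assumption for illustration, not necessarily the crate's value.
const BLOCK_SIZE: usize = usize::BITS as usize;

// Convert a capacity in bytes of heap memory to a number of bitset
// blocks, rounding up so a partial block still gets allocated.
fn visited_blocks(visited_capacity_bytes: usize) -> usize {
    let capacity_bits = 8 * visited_capacity_bytes;
    capacity_bits.div_ceil(BLOCK_SIZE)
}

fn main() {
    // 256 KiB of heap supports 2,097,152 bits of visited bookkeeping.
    assert_eq!(visited_blocks(256 * 1024) * BLOCK_SIZE, 2_097_152);
    // One byte is only 8 bits, but it still needs a whole block.
    assert_eq!(visited_blocks(1), 1);
}
```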

regex-automata/src/nfa/thompson/compiler.rs

Lines changed: 3 additions & 3 deletions

@@ -1008,7 +1008,7 @@ impl Compiler {
 /// but no more than `max` times.
 ///
 /// When `greedy` is true, then the preference is for the expression to
-/// match as much as possible. Otheriwse, it will match as little as
+/// match as much as possible. Otherwise, it will match as little as
 /// possible.
 fn c_bounded(
 &self,

@@ -1074,7 +1074,7 @@ impl Compiler {
 /// integer is likely to run afoul of any configured size limits.)
 ///
 /// When `greedy` is true, then the preference is for the expression to
-/// match as much as possible. Otheriwse, it will match as little as
+/// match as much as possible. Otherwise, it will match as little as
 /// possible.
 fn c_at_least(
 &self,

@@ -1155,7 +1155,7 @@ impl Compiler {
 /// times.
 ///
 /// When `greedy` is true, then the preference is for the expression to
-/// match as much as possible. Otheriwse, it will match as little as
+/// match as much as possible. Otherwise, it will match as little as
 /// possible.
 fn c_zero_or_one(
 &self,

regex-automata/src/nfa/thompson/error.rs

Lines changed: 3 additions & 3 deletions

@@ -3,7 +3,7 @@ use crate::util::{
 primitives::{PatternID, StateID},
 };

-/// An error that can occured during the construction of a thompson NFA.
+/// An error that can occurred during the construction of a thompson NFA.
 ///
 /// This error does not provide many introspection capabilities. There are
 /// generally only two things you can do with it:

@@ -161,13 +161,13 @@ impl core::fmt::Display for BuildError {
 }
 BuildErrorKind::TooManyPatterns { given, limit } => write!(
 f,
-"attemped to compile {} patterns, \
+"attempted to compile {} patterns, \
 which exceeds the limit of {}",
 given, limit,
 ),
 BuildErrorKind::TooManyStates { given, limit } => write!(
 f,
-"attemped to compile {} NFA states, \
+"attempted to compile {} NFA states, \
 which exceeds the limit of {}",
 given, limit,
 ),

regex-automata/src/nfa/thompson/nfa.rs

Lines changed: 1 addition & 1 deletion

@@ -1587,7 +1587,7 @@ pub enum State {
 /// in case they are useful. But mostly, all you'll need is `next` and
 /// `slot`.
 Capture {
-/// The state to transtition to, unconditionally.
+/// The state to transition to, unconditionally.
 next: StateID,
 /// The pattern ID that this capture belongs to.
 pattern_id: PatternID,

regex-automata/src/nfa/thompson/pikevm.rs

Lines changed: 1 addition & 1 deletion

@@ -2297,7 +2297,7 @@ impl Counters {
 trace!("===== START PikeVM Instrumentation Output =====");
 // We take the top-K most occurring state sets. Otherwise the output
 // is likely to be overwhelming. And we probably only care about the
-// most frequently occuring ones anyway.
+// most frequently occurring ones anyway.
 const LIMIT: usize = 20;
 let mut set_counts =
 self.state_sets.iter().collect::<Vec<(&Vec<StateID>, &u64)>>();

regex-automata/src/nfa/thompson/range_trie.rs

Lines changed: 3 additions & 3 deletions

@@ -131,7 +131,7 @@ sequences of ranges are sorted, and any corresponding ranges are either
 exactly equivalent or non-overlapping.

 In effect, a range trie is building a DFA from a sequence of arbitrary byte
-ranges. But it uses an algoritm custom tailored to its input, so it is not as
+ranges. But it uses an algorithm custom tailored to its input, so it is not as
 costly as traditional DFA construction. While it is still quite a bit more
 costly than the forward case (which only needs Daciuk's algorithm), it winds
 up saving a substantial amount of time if one is doing a full DFA powerset

@@ -188,7 +188,7 @@ pub struct RangeTrie {
 /// A stack for traversing this trie to yield sequences of byte ranges in
 /// lexicographic order.
 iter_stack: RefCell<Vec<NextIter>>,
-/// A bufer that stores the current sequence during iteration.
+/// A buffer that stores the current sequence during iteration.
 iter_ranges: RefCell<Vec<Utf8Range>>,
 /// A stack used for traversing the trie in order to (deeply) duplicate
 /// a state. States are recursively duplicated when ranges are split.

@@ -622,7 +622,7 @@ struct NextIter {
 }

 /// The next state to process during insertion and any remaining ranges that we
-/// want to add for a partcular sequence of ranges. The first such instance
+/// want to add for a particular sequence of ranges. The first such instance
 /// is always the root state along with all ranges given.
 #[derive(Clone, Debug)]
 struct NextInsert {

regex-automata/src/util/alphabet.rs

Lines changed: 1 addition & 1 deletion

@@ -132,7 +132,7 @@ impl Unit {
 }
 }

-/// If this unit is an "end of input" sentinel, then return the underyling
+/// If this unit is an "end of input" sentinel, then return the underlying
 /// sentinel value that was given to [`Unit::eoi`]. Otherwise return
 /// `None`.
 pub fn as_eoi(self) -> Option<u16> {

regex-automata/src/util/captures.rs

Lines changed: 1 addition & 1 deletion

@@ -1453,7 +1453,7 @@ impl GroupInfo {
 /// sequence of patterns yields a sequence of possible group names. The
 /// index of each pattern in the sequence corresponds to its `PatternID`,
 /// and the index of each group in each pattern's sequence corresponds to
-/// its coresponding group index.
+/// its corresponding group index.
 ///
 /// While this constructor is very generic and therefore perhaps hard to
 /// chew on, an example of a valid concrete type that can be passed to

regex-automata/src/util/determinize/mod.rs

Lines changed: 2 additions & 2 deletions

@@ -205,7 +205,7 @@ pub(crate) fn next(
 && unit.is_byte(lookm.get_line_terminator())
 {
 // Why only handle StartLF here and not Start? That's because Start
-// can only impact the starting state, which is speical cased in
+// can only impact the starting state, which is special cased in
 // start state handling.
 builder.set_look_have(|have| have.insert(Look::StartLF));
 }

@@ -412,7 +412,7 @@ pub(crate) fn epsilon_closure(
 /// and whether this state is being generated for a transition over a word byte
 /// when applicable) that are true immediately prior to transitioning into this
 /// state (via `builder.look_have()`). The match pattern IDs should correspond
-/// to matches that occured on the previous transition, since all matches are
+/// to matches that occurred on the previous transition, since all matches are
 /// delayed by one byte. The things that should _not_ be set are look-ahead
 /// assertions (EndLF, End and whether the next byte is a word byte or not).
 /// The builder state should also not have anything in `look_need` set, as this

regex-automata/src/util/determinize/state.rs

Lines changed: 2 additions & 2 deletions

@@ -60,7 +60,7 @@ DFA state to check if it already exists. If it does, then there's no need to
 freeze it into a `State`. It it doesn't exist, then `StateBuilderNFA::to_state`
 can be called to freeze the builder into an immutable `State`. In either
 case, `clear` should be called on the builder to turn it back into a
-`StateBuilderEmpty` that reuses the underyling memory.
+`StateBuilderEmpty` that reuses the underlying memory.

 The main purpose for splitting the builder into these distinct types is to
 make it impossible to do things like adding a pattern ID after adding an NFA

@@ -103,7 +103,7 @@ use crate::util::{
 /// This type is intended to be used only in NFA-to-DFA conversion via powerset
 /// construction.
 ///
-/// It may be cheaply cloned and accessed safely from mulitple threads
+/// It may be cheaply cloned and accessed safely from multiple threads
 /// simultaneously.
 #[derive(Clone, Eq, Hash, PartialEq, PartialOrd, Ord)]
 pub(crate) struct State(Arc<[u8]>);

regex-automata/src/util/iter.rs

Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,7 @@
 Generic helpers for iteration of matches from a regex engine in a haystack.

 The principle type in this module is a [`Searcher`]. A `Searcher` provides
-its own lower level iterater-like API in addition to methods for constructing
+its own lower level iterator-like API in addition to methods for constructing
 types that implement `Iterator`. The documentation for `Searcher` explains a
 bit more about why these different APIs exist.

regex-automata/src/util/lazy.rs

Lines changed: 1 addition & 1 deletion

@@ -159,7 +159,7 @@ mod lazy {
 impl<T, F: Fn() -> T> Lazy<T, F> {
 /// Get the underlying lazy value. If it hasn't been initialized
 /// yet, then always attempt to initialize it (even if some other
-/// thread is initializing it) and atomicly attach it to this lazy
+/// thread is initializing it) and atomically attach it to this lazy
 /// value before returning it.
 pub(super) fn get(&self) -> &T {
 if let Some(data) = self.poll() {
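
The doc comment in this hunk describes "racy" lazy initialization: every thread that finds the value uninitialized computes its own copy, and exactly one copy is atomically attached, with the losers discarding theirs. A minimal stdlib-only sketch of that pattern (a simplification for illustration, not the crate's `Lazy` type; the attached value is intentionally leaked here):

```rust
use std::ptr;
use std::sync::atomic::{AtomicPtr, Ordering};

struct RacyLazy {
    ptr: AtomicPtr<String>,
}

impl RacyLazy {
    const fn new() -> Self {
        RacyLazy { ptr: AtomicPtr::new(ptr::null_mut()) }
    }

    fn get(&self, init: impl Fn() -> String) -> &String {
        let p = self.ptr.load(Ordering::Acquire);
        if !p.is_null() {
            // Already initialized: return the attached value.
            return unsafe { &*p };
        }
        // Compute our own copy, even if another thread is doing the same.
        let new = Box::into_raw(Box::new(init()));
        match self.ptr.compare_exchange(
            ptr::null_mut(),
            new,
            Ordering::AcqRel,
            Ordering::Acquire,
        ) {
            // We won the race: our value is now atomically attached.
            Ok(_) => unsafe { &*new },
            // Another thread won: free our copy and use theirs.
            Err(winner) => {
                drop(unsafe { Box::from_raw(new) });
                unsafe { &*winner }
            }
        }
    }
}

fn main() {
    let lazy = RacyLazy::new();
    assert_eq!(lazy.get(|| "hello".to_string()), "hello");
    // A later closure never replaces the already-attached value.
    assert_eq!(lazy.get(|| "ignored".to_string()), "hello");
}
```

The appeal of this scheme over a blocking one is that `get` never waits: wasted duplicate work is traded for freedom from locks.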

regex-automata/src/util/look.rs

Lines changed: 2 additions & 2 deletions

@@ -972,8 +972,8 @@ impl core::fmt::Display for UnicodeWordBoundaryError {
 fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
 write!(
 f,
-"Unicode-aware \\b and \\B are unavailabe because the \
-requiste data tables are missing, please enable the \
+"Unicode-aware \\b and \\B are unavailable because the \
+requisite data tables are missing, please enable the \
 unicode-word-boundary feature"
 )
 }

0 commit comments