1 Your friendly guide to understanding the performance characteristics of this
2 crate.
3 
4 This guide assumes some familiarity with the public API of this crate, which
5 can be found here: https://docs.rs/regex
6 
7 ## Theory vs. Practice
8 
9 One of the design goals of this crate is to provide worst case linear time
10 behavior with respect to the text searched using finite state automata. This
means that, *in theory*, the performance of this crate is much better than
most regex implementations, which typically use backtracking and therefore
have worst case exponential time.
14 
15 For example, try opening a Python interpreter and typing this:
16 
17     >>> import re
18     >>> re.search('(a*)*c', 'a' * 30).span()
19 
20 I'll wait.
21 
22 At some point, you'll figure out that it won't terminate any time soon. ^C it.
23 
24 The promise of this crate is that *this pathological behavior can't happen*.
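
To see the contrast, here is a minimal sketch of the same pathological
pattern using this crate; it returns promptly because there is no
backtracking:

    use regex::Regex;

    fn main() {
        // There is no match ('c' never appears), but unlike a
        // backtracking engine, the search completes in linear time.
        let re = Regex::new(r"(a*)*c").unwrap();
        let hay = "a".repeat(30);
        assert!(!re.is_match(&hay));
    }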
25 
26 With that said, just because we have protected ourselves against worst case
27 exponential behavior doesn't mean we are immune from large constant factors
28 or places where the current regex engine isn't quite optimal. This guide will
29 detail those cases and provide guidance on how to avoid them, among other
30 bits of general advice.
31 
32 ## Thou Shalt Not Compile Regular Expressions In A Loop
33 
34 **Advice**: Use `lazy_static` to amortize the cost of `Regex` compilation.
35 
36 Don't do it unless you really don't mind paying for it. Compiling a regular
37 expression in this crate is quite expensive. It is conceivable that it may get
38 faster some day, but I wouldn't hold out hope for, say, an order of magnitude
improvement. In particular, compilation can take anywhere from a few dozen
40 microseconds to a few dozen milliseconds. Yes, milliseconds. Unicode character
41 classes, in particular, have the largest impact on compilation performance. At
42 the time of writing, for example, `\pL{100}` takes around 44ms to compile. This
43 is because `\pL` corresponds to every letter in Unicode and compilation must
44 turn it into a proper automaton that decodes a subset of UTF-8 which
45 corresponds to those letters. Compilation also spends some cycles shrinking the
46 size of the automaton.
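
If you want to see this for yourself, here is a quick sketch (the exact
timing will of course vary by machine and crate version):

    use regex::Regex;
    use std::time::Instant;

    fn main() {
        // Time compilation of a large Unicode class repetition.
        let start = Instant::now();
        let _re = Regex::new(r"\pL{100}").unwrap();
        println!("compiled in {:?}", start.elapsed());
    }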
47 
48 This means that in order to realize efficient regex matching, one must
49 *amortize the cost of compilation*. Trivially, if a call to `is_match` is
50 inside a loop, then make sure your call to `Regex::new` is *outside* that loop.
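
For example, a minimal sketch (the pattern and helper are illustrative):

    use regex::Regex;

    // Compile once, outside the loop, rather than calling
    // Regex::new on every iteration.
    fn count_matching_lines(lines: &[&str]) -> usize {
        let re = Regex::new(r"\w+").unwrap();
        lines.iter().filter(|line| re.is_match(line)).count()
    }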
51 
52 In many programming languages, regular expressions can be conveniently defined
53 and compiled in a global scope, and code can reach out and use them as if
54 they were global static variables. In Rust, there is really no concept of
55 life-before-main, and therefore, one cannot utter this:
56 
57     static MY_REGEX: Regex = Regex::new("...").unwrap();
58 
59 Unfortunately, this would seem to imply that one must pass `Regex` objects
60 around to everywhere they are used, which can be especially painful depending
61 on how your program is structured. Thankfully, the
62 [`lazy_static`](https://crates.io/crates/lazy_static)
63 crate provides an answer that works well:
64 
65     use lazy_static::lazy_static;
66     use regex::Regex;
67 
68     fn some_helper_function(text: &str) -> bool {
69         lazy_static! {
70             static ref MY_REGEX: Regex = Regex::new("...").unwrap();
71         }
72         MY_REGEX.is_match(text)
73     }
74 
75 In other words, the `lazy_static!` macro enables us to define a `Regex` *as if*
76 it were a global static value. What is actually happening under the covers is
77 that the code inside the macro (i.e., `Regex::new(...)`) is run on *first use*
78 of `MY_REGEX` via a `Deref` impl. The implementation is admittedly magical, but
79 it's self contained and everything works exactly as you expect. In particular,
80 `MY_REGEX` can be used from multiple threads without wrapping it in an `Arc` or
81 a `Mutex`. On that note...
82 
83 ## Using a regex from multiple threads
84 
**Advice**: The performance impact from using a `Regex` from multiple threads
is likely negligible. If necessary, clone the `Regex` so that each thread gets
its own copy. Cloning a regex uses no more memory than sharing a single
`Regex` across multiple threads simultaneously. *Its only cost is
ergonomics.*
90 
91 It is supported and encouraged to define your regexes using `lazy_static!` as
92 if they were global static values, and then use them to search text from
93 multiple threads simultaneously.
94 
95 One might imagine that this is possible because a `Regex` represents a
96 *compiled* program, so that any allocation or mutation is already done, and is
97 therefore read-only. Unfortunately, this is not true. Each type of search
98 strategy in this crate requires some kind of mutable scratch space to use
99 *during search*. For example, when executing a DFA, its states are computed
100 lazily and reused on subsequent searches. Those states go into that mutable
101 scratch space.
102 
103 The mutable scratch space is an implementation detail, and in general, its
104 mutation should not be observable from users of this crate. Therefore, it uses
105 interior mutability. This implies that `Regex` can either only be used from one
106 thread, or it must do some sort of synchronization. Either choice is
107 reasonable, but this crate chooses the latter, in particular because it is
ergonomic and makes using it with `lazy_static!` straightforward.
109 
110 Synchronization implies *some* amount of overhead. When a `Regex` is used from
111 a single thread, this overhead is negligible. When a `Regex` is used from
112 multiple threads simultaneously, it is possible for the overhead of
synchronization from contention to impact performance. The specific case
where contention may happen is when you call any of these methods repeatedly
from multiple threads simultaneously:
116 
117 * shortest_match
118 * is_match
119 * find
120 * captures
121 
122 In particular, every invocation of one of these methods must synchronize with
123 other threads to retrieve its mutable scratch space before searching can start.
124 If, however, you are using one of these methods:
125 
126 * find_iter
127 * captures_iter
128 
129 Then you may not suffer from contention since the cost of synchronization is
130 amortized on *construction of the iterator*. That is, the mutable scratch space
131 is obtained when the iterator is created and retained throughout its lifetime.
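
If contention does show up in a profile, the advice above to clone the
`Regex` per thread sidesteps it entirely. A minimal sketch (the pattern and
haystack are illustrative):

    use regex::Regex;
    use std::thread;

    fn main() {
        let re = Regex::new(r"\w+").unwrap();
        let handles: Vec<_> = (0..4)
            .map(|_| {
                // Each thread gets its own clone. The compiled program
                // is shared internally, so this is cheap.
                let re = re.clone();
                thread::spawn(move || re.is_match("some text"))
            })
            .collect();
        for handle in handles {
            assert!(handle.join().unwrap());
        }
    }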
132 
133 ## Only ask for what you need
134 
135 **Advice**: Prefer in this order: `is_match`, `find`, `captures`.
136 
137 There are three primary search methods on a `Regex`:
138 
139 * is_match
140 * find
141 * captures
142 
143 In general, these are ordered from fastest to slowest.
144 
145 `is_match` is fastest because it doesn't actually need to find the start or the
146 end of the leftmost-first match. It can quit immediately after it knows there
is a match. For example, given the regex `a+` and the haystack `aaaaa`, the
148 search will quit after examining the first byte.
149 
150 In contrast, `find` must return both the start and end location of the
151 leftmost-first match. It can use the DFA matcher for this, but must run it
152 forwards once to find the end of the match *and then run it backwards* to find
153 the start of the match. The two scans and the cost of finding the real end of
154 the leftmost-first match make this more expensive than `is_match`.
155 
156 `captures` is the most expensive of them all because it must do what `find`
157 does, and then run either the bounded backtracker or the Pike VM to fill in the
158 capture group locations. Both of these are simulations of an NFA, which must
159 spend a lot of time shuffling states around. The DFA limits the performance hit
160 somewhat by restricting the amount of text that must be searched via an NFA
161 simulation.
162 
One other method not mentioned is `shortest_match`. This method has precisely
the same performance characteristics as `is_match`, except that it also
returns the end location at which a match was detected. For example, given
the regex `a+` and the haystack `aaaaa`, `shortest_match` may return `1` as
opposed to `5`, the latter of which is the correct end location of the
leftmost-first match.
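
To make the ordering concrete, here is a small sketch reusing the `a+`
example from above:

    use regex::Regex;

    fn main() {
        let re = Regex::new(r"a+").unwrap();
        let hay = "aaaaa";

        // Cheapest: only needs to know that a match exists.
        assert!(re.is_match(hay));

        // Similar cost to is_match, but also reports an end offset,
        // which may come before the true end of the match.
        assert!(re.shortest_match(hay).is_some());

        // More work: the start and end of the leftmost-first match.
        let m = re.find(hay).unwrap();
        assert_eq!((m.start(), m.end()), (0, 5));

        // Most work: capture group locations as well.
        let caps = re.captures(hay).unwrap();
        assert_eq!(&caps[0], "aaaaa");
    }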
168 
169 ## Literals in your regex may make it faster
170 
171 **Advice**: Literals can reduce the work that the regex engine needs to do. Use
172 them if you can, especially as prefixes.
173 
174 In particular, if your regex starts with a prefix literal, the prefix is
175 quickly searched before entering the (much slower) regex engine. For example,
176 given the regex `foo\w+`, the literal `foo` will be searched for using
177 Boyer-Moore. If there's no match, then no regex engine is ever used. Only when
178 there's a match is the regex engine invoked at the location of the match, which
179 effectively permits the regex engine to skip large portions of a haystack.
If a regex is composed entirely of literals (possibly more than one), then
181 it's possible that the regex engine can be avoided entirely even when there's a
182 match.
183 
184 When one literal is found, Boyer-Moore is used. When multiple literals are
185 found, then an optimized version of Aho-Corasick is used.
186 
This crate extends this optimization quite a bit. Here are a few examples of
regexes that get literal prefixes detected:
189 
190 * `(foo|bar)` detects `foo` and `bar`
191 * `(a|b)c` detects `ac` and `bc`
192 * `[ab]foo[yz]` detects `afooy`, `afooz`, `bfooy` and `bfooz`
193 * `a?b` detects `a` and `b`
194 * `a*b` detects `a` and `b`
195 * `(ab){3,6}` detects `ababab`
196 
197 Literals in anchored regexes can also be used for detecting non-matches very
198 quickly. For example, `^foo\w+` and `\w+foo$` may be able to detect a non-match
199 just by examining the first (or last) three bytes of the haystack.
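
For illustration, a sketch of a regex with a literal prefix (the pattern and
haystacks are just examples):

    use regex::Regex;

    fn main() {
        // The prefix "foo" is hunted with a fast substring search;
        // the regex engine only runs where "foo" actually occurs.
        let re = Regex::new(r"foo\w+").unwrap();
        assert!(re.is_match("prefix scanning finds foobar quickly"));
        assert!(!re.is_match("nothing to see here"));
    }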
200 
201 ## Unicode word boundaries may prevent the DFA from being used
202 
203 **Advice**: In most cases, `\b` should work well. If not, use `(?-u:\b)`
204 instead of `\b` if you care about consistent performance more than correctness.
205 
206 It's a sad state of the current implementation. At the moment, the DFA will try
207 to interpret Unicode word boundaries as if they were ASCII word boundaries.
208 If the DFA comes across any non-ASCII byte, it will quit and fall back to an
209 alternative matching engine that can handle Unicode word boundaries correctly.
The alternative matching engine is generally quite a bit slower (perhaps by an
211 order of magnitude). If necessary, this can be ameliorated in two ways.
212 
213 The first way is to add some number of literal prefixes to your regular
214 expression. Even though the DFA may not be used, specialized routines will
215 still kick in to find prefix literals quickly, which limits how much work the
216 NFA simulation will need to do.
217 
218 The second way is to give up on Unicode and use an ASCII word boundary instead.
219 One can use an ASCII word boundary by disabling Unicode support. That is,
instead of using `\b`, use `(?-u:\b)`. For example, the regex `\b.+\b` can be
transformed into `(?-u:\b).+(?-u:\b)`, which permits use of the DFA. It
222 is important to limit the scope of disabling the `u` flag, since it might lead
223 to a syntax error if the regex could match arbitrary bytes. For example, if one
224 wrote `(?-u)\b.+\b`, then a syntax error would be returned because `.` matches
225 any *byte* when the Unicode flag is disabled.
226 
The second way isn't appreciably different from just using a Unicode word
228 boundary in the first place, since the DFA will speculatively interpret it as
229 an ASCII word boundary anyway. The key difference is that if an ASCII word
230 boundary is used explicitly, then the DFA won't quit in the presence of
231 non-ASCII UTF-8 bytes. This results in giving up correctness in exchange for
232 more consistent performance.
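
A sketch of the transformation, including the scoping pitfall described
above:

    use regex::Regex;

    fn main() {
        // Unicode word boundaries: correct, but the DFA may bail out
        // on non-ASCII input.
        let unicode = Regex::new(r"\b.+\b").unwrap();

        // ASCII word boundaries keep the DFA usable on any input, at
        // the cost of correctness around non-ASCII word characters.
        let ascii = Regex::new(r"(?-u:\b).+(?-u:\b)").unwrap();

        // Disabling Unicode for the whole pattern is a syntax error,
        // since `.` would then match arbitrary bytes.
        assert!(Regex::new(r"(?-u)\b.+\b").is_err());

        assert!(unicode.is_match("hello world"));
        assert!(ascii.is_match("hello world"));
    }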
233 
234 N.B. When using `bytes::Regex`, Unicode support is disabled by default, so one
235 can simply write `\b` to get an ASCII word boundary.
236 
237 ## Excessive counting can lead to exponential state blow up in the DFA
238 
239 **Advice**: Don't write regexes that cause DFA state blow up if you care about
240 match performance.
241 
242 Wait, didn't I say that this crate guards against exponential worst cases?
243 Well, it turns out that the process of converting an NFA to a DFA can lead to
244 an exponential blow up in the number of states. This crate specifically guards
245 against exponential blow up by doing two things:
246 
247 1. The DFA is computed lazily. That is, a state in the DFA only exists in
248    memory if it is visited. In particular, the lazy DFA guarantees that *at
249    most* one state is created for every byte of input. This, on its own,
250    guarantees linear time complexity.
251 2. Of course, creating a new state for *every* byte of input means that search
252    will go incredibly slow because of very large constant factors. On top of
253    that, creating a state for every byte in a large haystack could result in
254    exorbitant memory usage. To ameliorate this, the DFA bounds the number of
255    states it can store. Once it reaches its limit, it flushes its cache. This
256    prevents reuse of states that it already computed. If the cache is flushed
257    too frequently, then the DFA will give up and execution will fall back to
258    one of the NFA simulations.
259 
260 In effect, this crate will detect exponential state blow up and fall back to
261 a search routine with fixed memory requirements. This does, however, mean that
searching will be much slower than one might expect. Regexes that rely on
counting in particular are strong aggravators of this behavior. For example,
consider matching `[01]*1[01]{20}$` against a random sequence of `0`s and
`1`s.
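
Here is a sketch of that counting-heavy pattern in use (the haystack is
illustrative):

    use regex::Regex;

    fn main() {
        // Bounded repetition multiplies DFA states; on varied 0/1
        // input the lazy DFA's cache may thrash, forcing a fall back
        // to a slower engine with fixed memory requirements. The
        // search remains linear time, just with a larger constant.
        let re = Regex::new(r"[01]*1[01]{20}$").unwrap();
        let hay = "01".repeat(10_000);
        let _ = re.is_match(&hay);
    }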
265 
266 In the future, it may be possible to increase the bound that the DFA uses,
267 which would allow the caller to choose how much memory they're willing to
268 spend.
269 
270 ## Resist the temptation to "optimize" regexes
271 
272 **Advice**: This ain't a backtracking engine.
273 
274 An entire book was written on how to optimize Perl-style regular expressions.
275 Most of those techniques are not applicable for this library. For example,
276 there is no problem with using non-greedy matching or having lots of
277 alternations in your regex.
278