This change aims to address two shortcomings:

- `unsafe` usage for vec initialization
- cyclomatic_complexity warning: the function is too big

and to solve them without sacrificing too much performance.
I addressed the complexity issue by splitting the `number_of_contours > 1` case out into a separate private method. That method is `#[inline]`d to eliminate any performance impact from the split.
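A minimal sketch of that refactor (function and type names here are hypothetical, not the crate's exact identifiers): the large function keeps the dispatch, and the heavy branch moves into an `#[inline]` helper.

```rust
// Illustrative only: a stand-in for the real vertex type.
struct Vertex {
    x: i16,
    y: i16,
}

// The outer function keeps the branch logic small, which satisfies the
// cyclomatic-complexity lint.
fn glyph_shape(number_of_contours: i32) -> Vec<Vertex> {
    if number_of_contours > 1 {
        glyph_shape_positive_contours(number_of_contours)
    } else {
        // Composite / empty glyph cases would be handled here.
        Vec::new()
    }
}

// `#[inline]` asks the compiler to inline the helper back into its caller,
// so the split should compile to the same code as the original big function.
#[inline]
fn glyph_shape_positive_contours(n: i32) -> Vec<Vertex> {
    // Placeholder body standing in for the real contour parsing.
    (0..n).map(|i| Vertex { x: i as i16, y: 0 }).collect()
}
```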
The `set_len(m)` initialization was trickier. A simple safe initialization with a dummy `Vertex`, i.e. `vec![dummy; m]`, provided a drop-in solution but performed noticeably worse:
```
name                          control ns/iter  change ns/iter  diff ns/iter  diff %  speedup
get_glyph_shape_deja_vu_mono  12,949           15,024          2,075         16.02%  x 0.86
get_glyph_shape_gudea         11,407           14,336          2,929         25.68%  x 0.80
```
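For context, the two initialization strategies look roughly like this (an illustrative sketch, not the crate's actual code):

```rust
// Stand-in vertex type; `Default` gives us a dummy value to clone.
#[derive(Clone, Copy, Default)]
struct Vertex {
    x: i16,
    y: i16,
}

// Safe drop-in replacement: clone a dummy vertex m times. Simple, but the
// up-front fill is what showed up as the slowdown in the benchmark above.
fn init_safe(m: usize) -> Vec<Vertex> {
    vec![Vertex::default(); m]
}

// Original unsafe approach: reserve capacity, then claim the length without
// initializing anything. This is only sound if every slot is written before
// it is ever read.
fn init_unsafe(m: usize) -> Vec<Vertex> {
    let mut v = Vec::with_capacity(m);
    unsafe { v.set_len(m) }; // UB if any element is read uninitialized
    v
}
```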
After staring at the algorithm for a bit I found it a little odd. The first thing it does is read "flag" data for the vertices and save it as `Vertex` data at the end of the vec. However, it isn't vertex data; it is only used to compute the actual vertex data, which is written from the beginning of the vec. The real `Vertex` values then flow over the flag data, though in practice the flag read position always stays ahead of the vertex write position. To me this is all a bit weird.
I separated the flag data into its own vec, which removes the need to append fake-vertex structs to the back of the vec. It also separates logically distinct data, and allows `FlagData` to be a smaller struct than `Vertex`, with specialized types.
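The separation can be sketched like this (field layouts and names are assumptions for illustration, not the crate's exact definitions):

```rust
// Stand-in for the full vertex type: several i16 fields plus a kind tag.
struct Vertex {
    kind: u8,
    x: i16,
    y: i16,
    cx: i16,
    cy: i16,
}

// One byte per point instead of a whole Vertex, stored in its own Vec.
struct FlagData(u8);

impl FlagData {
    // In TrueType glyph flags, bit 0 marks an on-curve point.
    fn on_curve(&self) -> bool {
        self.0 & 1 != 0
    }
}

// Collect flags into their own Vec rather than the tail of the vertex Vec.
// Real parsing would also handle the repeat bit; omitted for brevity.
fn read_flags(raw: &[u8]) -> Vec<FlagData> {
    raw.iter().map(|&b| FlagData(b)).collect()
}
```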
Adding some other minor type changes and byte-reading improvements gives us a safe version of the algorithm without much performance loss:
```
name                          control ns/iter  change ns/iter  diff ns/iter  diff %  speedup
get_glyph_shape_deja_vu_mono  12,949           12,861          -88           -0.68%  x 1.01
get_glyph_shape_gudea         11,407           11,901          494           4.33%   x 0.96
```