Ten years ago, while writing a physics engine, I learned about the Barnes-Hut algorithm for the gravitational n-body problem. Normally, computing the Newtonian gravitational forces between n bodies requires \(O(n^2)\) evaluations of Newton’s law of universal gravitation, as every body exerts a force on every other body in the system. Barnes-Hut is an approximation that brings that complexity down to \(O(n \log n)\), by treating clusters of faraway objects as single particles. An octree is used to subdivide the space, and when a certain threshold is met, an internal node may be treated as a point particle at its children’s center of mass, rather than recursing into it.
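Here is a rough, self-contained sketch of that traversal for the gravitational case; the types and the opening-angle parameter \(\theta\) are illustrative, not taken from any particular implementation:

#include <cmath>
#include <memory>
#include <vector>

// Minimal sketch of Barnes-Hut force accumulation (illustrative types).
struct Vec3 { double x = 0, y = 0, z = 0; };

struct Node {
    bool is_leaf = true;
    double width = 0;          // side length of this octree cell
    double total_mass = 0;
    Vec3 center_of_mass;
    std::vector<std::unique_ptr<Node>> children;   // up to 8 for an octree
};

// Accumulates the gravitational acceleration at `pos` from the subtree
// rooted at `node`. If a cell is small relative to its distance
// (width / distance < theta), its contents are treated as one point mass.
void accumulate(const Node& node, const Vec3& pos, double theta, Vec3& acc) {
    const double dx = node.center_of_mass.x - pos.x;
    const double dy = node.center_of_mass.y - pos.y;
    const double dz = node.center_of_mass.z - pos.z;
    const double dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    if (dist == 0.0) return;                       // skip the body itself
    if (node.is_leaf || node.width / dist < theta) {
        const double g = node.total_mass / (dist * dist * dist);  // G omitted
        acc.x += g * dx; acc.y += g * dy; acc.z += g * dz;
    } else {
        for (const auto& child : node.children)
            accumulate(*child, pos, theta, acc);
    }
}

Lowering \(\theta\) recurses deeper and is more accurate; raising it summarizes more aggressively and is faster.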
I didn’t use the Barnes-Hut algorithm at the time. But years later I was surprised to see it being used in the seemingly unrelated context of deep learning. t-SNE is a dimensionality reduction technique often used for visualizing high-dimensional data like word embeddings. It attempts to map points that are close in the high-dimensional space to points that are still close in a low-dimensional space, preserving clusters in the high-dimensional representation. It turns out that t-SNE can be accelerated by applying the same Barnes-Hut technique as gravitational simulation, since they both involve an inverse square law computed between pairs of points. In this post, I’m going to take the Barnes-Hut t-SNE implementation by the original inventor of t-SNE, Laurens van der Maaten (with Geoffrey Hinton), and show how its performance can be further increased while keeping the algorithm the same.
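To see why the analogy holds, recall (from the van der Maaten 2014 paper in the references below) that the t-SNE gradient for a low-dimensional point \(y_i\) splits into an attractive and a repulsive part:

\[\frac{\partial C}{\partial y_i} = 4\left(\sum_{j \neq i} p_{ij} q_{ij} Z \,(y_i - y_j) \;-\; \sum_{j \neq i} q_{ij}^2 Z \,(y_i - y_j)\right), \qquad Z = \sum_{k \neq l} \left(1 + \lVert y_k - y_l \rVert^2\right)^{-1},\]

where \(p_{ij}\) are the input similarities and \(q_{ij}\) the low-dimensional ones. The attractive sum is sparse (only near neighbors have non-negligible \(p_{ij}\)), while the repulsive sum runs over all pairs and decays with distance, which is exactly the structure Barnes-Hut exploits: faraway groups of points can be summarized by their center of mass.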
The first step is to check out and build the code:
$ git clone git@github.com:lvdmaaten/bhtsne.git
Cloning into 'bhtsne'...
remote: Counting objects: 205, done.
remote: Total 205 (delta 0), reused 0 (delta 0), pack-reused 205
Receiving objects: 100% (205/205), 83.59 KiB | 2.70 MiB/s, done.
Resolving deltas: 100% (110/110), done.
$ cd bhtsne
$ g++ -O2 -g -Wall -Wno-sign-compare sptree.cpp tsne.cpp tsne_main.cpp -o bh_tsne
To get an idea of its current performance, let’s test it on some real-world data, the GloVe word vectors:
$ curl -LO http://nlp.stanford.edu/data/glove.6B.zip
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0   308    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  822M  100  822M    0     0  5661k      0  0:02:28  0:02:28 --:--:-- 3072k
$ unzip glove.6B.zip
Archive:  glove.6B.zip
  inflating: glove.6B.50d.txt
  inflating: glove.6B.100d.txt
  inflating: glove.6B.200d.txt
  inflating: glove.6B.300d.txt

We have to strip out the first column, which is the words themselves:
$ cut -d' ' -f2- glove.6B.50d.txt >glove.6B.50d.cut.txt

And now we can use the Python wrapper to run the tool:
$ ./bhtsne.py -v -r1 -i glove.6B.50d.cut.txt -o glove.6B.50d.tsne.txt
Read the 400000 x 50 data matrix successfully!
Using random seed: 1
Using no_dims = 2, perplexity = 50.000000, and theta = 0.500000
Computing input similarities...
Building tree...
 - point 0 of 400000
...
 - point 390000 of 400000
Input similarities computed in 21409.84 seconds (sparsity = 0.000682)!
Learning embedding...
Iteration 50: error is 131.843420 (50 iterations in 729.03 seconds)
...
Iteration 999: error is 6.373127 (50 iterations in 654.72 seconds)
Fitting performed in 15039.99 seconds.
Wrote the 400000 x 2 data matrix successfully!
./bhtsne.py -v -r1 -i glove.6B.50d.cut.txt -o glove.6B.50d.tsne.txt  36697.76s user 228.13s system 99% cpu 10:17:09.79 total

Wow, that took a long time! Let’s see if we can speed it up a bit.
Profiling
First, because premature optimization is the root of all evil, we should profile to see where the bottlenecks are. We don’t have all day, so let’s make a smaller input file for benchmarking, with just the first 1,000 points:
$ head -n1000 glove.6B.50d.cut.txt >glove.6B.50d.small.txt

And use the Linux perf tool to sample the behavior of the program:
$ perf record --call-graph=dwarf ./bh_tsne

‘perf report’ shows where it’s spending most of its time:
- 94.58%  0.00%  bh_tsne  bh_tsne  [.] main
   - main
      - 94.57% TSNE::run
         - 87.76% TSNE::computeGradient
            - 49.31% SPTree::computeNonEdgeForces
               - 48.92% SPTree::computeNonEdgeForces
                  + 47.42% SPTree::computeNonEdgeForces
              21.25% SPTree::computeEdgeForces
            + 10.90% SPTree::SPTree
            + 6.11% SPTree::~SPTree
         + 4.02% TSNE::evaluateError
         + 2.10% TSNE::computeGaussianPerplexity

So it looks like SPTree::computeNonEdgeForces(), which happens to be the octree traversal, is the main candidate for optimization. Diving into the annotated assembly view of perf report by pressing A, a couple of hotspots are evident.
Avoiding square roots
Let’s start with this one:
       │       if(is_leaf || max_width / sqrt(D) < theta) {
  5.24 │       sqrtsd %xmm2,%xmm4
       │     ↓ ja     2115 <SPTree::computeNonEdgeForces
 10.37 │18e:   divsd  %xmm4,%xmm1
  0.45 │       comisd %xmm1,%xmm3
  1.48 │     ↓ ja     20b0 <SPTree::computeNonEdgeForces(unsigned int, double,

The criterion \(\frac{w_{\max}}{\sqrt{D}} < \theta \) determines whether the current node is a good enough approximation of its children, or if we have to recurse into it and handle its children separately.
Square roots and division are both very expensive operations, but because \(x^2\) is an increasing function on positive numbers, we can rewrite the inequality in the more efficient form \(w_{\max}^2 < \theta^2 D\). With this patch, bhtsne is about 5% faster on this dataset.
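Here is a minimal standalone sketch of the transformation; the parameter names are illustrative rather than the exact bhtsne variables:

#include <cmath>

// The two checks are equivalent for non-negative inputs, but the second
// needs neither a square root nor a division.
bool summarize_old(bool is_leaf, double max_width, double D, double theta) {
    return is_leaf || max_width / std::sqrt(D) < theta;
}

bool summarize_new(bool is_leaf, double max_width, double D, double theta) {
    return is_leaf || max_width * max_width < theta * theta * D;
}

Since \(\theta\) is fixed for the whole run, \(\theta^2\) could even be computed once up front.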
Hoisting invariants
Now let’s move on to this hotspot:
       │           double cur_width;
       │           for(unsigned int d = 0; d < dimension; d++) {
       │               cur_width = boundary->getWidth(d);
       │               max_width = (max_width > cur_width) ? max_width : cur_width;
 10.98 │ cc:   maxsd  (%r9),%xmm1

This loop is computing the maximum width of the node along any axis. But doesn’t that remain constant for the life of the node? We could compute that once for every node and store it in a field to save some work. Doing that makes it about another 5% faster.
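A sketch of the idea, using an illustrative class rather than the real SPTree: compute the maximum once when the node is built and read the cached value on every traversal.

#include <algorithm>
#include <vector>

// Illustrative node: the per-axis widths never change after construction,
// so the maximum can be cached in a field instead of recomputed per visit.
struct CellSketch {
    std::vector<double> width;   // extent along each axis
    double max_width;            // cached maximum of width[]

    explicit CellSketch(std::vector<double> w)
        : width(std::move(w)),
          max_width(*std::max_element(width.begin(), width.end())) {}
};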
Shrinking the footprint
Why exactly did the last patch help? ‘maxsd’ is not a slow operation (computing the maximum of two doubles) and it wasn’t a long loop (dimension == 2 in this case). In fact, it was slow for the same reason almost all code is slow these days: memory locality. The getWidth(d) call ultimately looks up the width in an array somewhere else in memory. As CPUs have gotten faster over the years, RAM has failed to keep pace. Levels of smaller and faster on-CPU caches have mitigated the issue, but the smaller your memory footprint is, and the fewer pointers you have to chase, the more of your working set will fit in lower levels of cache and the faster your program will be.
So, if we can reduce the memory footprint of the tree, we’ll probably be able to keep more of it in the cache, resulting in faster code. After reworking the implementation to use standard C++ containers instead of manual memory management, this patch introduces a smaller node structure. These changes make it almost 40% faster yet on our benchmark.
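To illustrate the direction of the change (a simplified sketch, not the exact layout used upstream or in the patch):

#include <array>
#include <vector>

// Before (sketch): each node owns several small heap allocations, so a
// traversal keeps chasing pointers into unrelated parts of memory.
struct NodeBefore {
    double*      center_of_mass;   // new double[dimension]
    double*      corner;           // new double[dimension]
    double*      width;            // new double[dimension]
    NodeBefore** children;         // new NodeBefore*[num_children]
    unsigned int cum_size;
    bool         is_leaf;
};

// After (sketch): small fixed-size data lives inline in the node, and the
// nodes themselves live in one contiguous vector owned by the tree, so a
// traversal touches far fewer cache lines.
struct NodeAfter {
    std::array<double, 2> center_of_mass;  // no_dims == 2 for a 2-D embedding
    std::array<double, 2> corner;
    std::array<double, 2> width;
    unsigned int first_child;              // index into the tree's node vector
    unsigned int cum_size;
    bool is_leaf;
};

struct TreeSketch {
    std::vector<NodeAfter> nodes;          // nodes[0] is the root
};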
We can shrink the footprint a bit more by noticing that all the nodes at the same level have the same size, just different positions, so they can share their width member. This patch gains us about another 5%.
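Continuing the sketch above, one way to share the widths (illustrative names only):

#include <array>
#include <vector>

// Sketch: every subdivision halves the cell along each axis, so all nodes
// at depth k have the same extent. Store one width per level and let a
// node carry only its depth.
struct LevelWidths {
    std::vector<std::array<double, 2>> width_at;   // width_at[depth][axis]

    void add_level() {
        std::array<double, 2> w = width_at.back();
        w[0] *= 0.5;
        w[1] *= 0.5;
        width_at.push_back(w);
    }
};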
Finally, by eliminating the scratch-space array buff and just recomputing the values as necessary, we gain almost another 10%. Altogether, we’re now over 70% faster on this benchmark.
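A sketch of the pattern, following the general shape of the Barnes-Hut repulsive-force accumulation (function and variable names are illustrative, not the upstream code):

// Instead of writing point - center_of_mass into a shared scratch array
// (the old buff) and reading it back in a second loop, recompute the
// difference where it is needed so it stays in registers.
void add_non_edge_force(const double* point, const double* center_of_mass,
                        unsigned int dimension, double cum_size,
                        double& sum_Q, double* neg_force) {
    double dist2 = 0.0;
    for (unsigned int d = 0; d < dimension; d++) {
        const double diff = point[d] - center_of_mass[d];
        dist2 += diff * diff;
    }
    double mult = cum_size / (1.0 + dist2);   // Student-t repulsion weight
    sum_Q += mult;
    mult /= (1.0 + dist2);
    for (unsigned int d = 0; d < dimension; d++) {
        const double diff = point[d] - center_of_mass[d];  // recomputed, no buff[]
        neg_force[d] += mult * diff;
    }
}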
Scaling up
Now let’s check how this scales to larger datasets.
Figure 1 shows that with 10,000 points, we’re over 100% faster, going from 3m09s to just 1m32s. But at 100,000 points, the benefit falls back down to 75%, going from 1h14m to 42m. The output hints that the bottleneck has started shifting to a different part of the program:
$ ./bhtsne.py -v -r1 -i glove.6B.50d.large.txt -o glove.6B.50d.tsne.txt
...
Input similarities computed in 1388.20 seconds (sparsity = 0.002531)!
...
Fitting performed in 1081.47 seconds.
...
./bhtsne.py -v -r1 -i glove.6B.50d.large.txt -o glove.6B.50d.tsne.txt  2529.04s user 1.70s system 99% cpu 42:19.12 total

‘perf’ confirms that with more points, we’re spending more time in a different part of the program:
- 95.04%  0.26%  bh_tsne  bh_tsne  [.] TSNE::run
   - 94.78% TSNE::run
      - 77.80% TSNE::computeGradient
         + 45.46% SPTree::computeNonEdgeForces
           25.83% SPTree::computeEdgeForces
         + 4.71% SPTree::SPTree
         + 1.61% SPTree::~SPTree
      - 11.67% TSNE::computeGaussianPerplexity
         - 10.62% VpTree::search (inlined)
            + 10.23% VpTree::search (inlined)
              0.79% expf32x
      + 4.56% TSNE::evaluateError
        0.69% TSNE::symmetrizeMatrix

VpTree::search() is starting to show up in the profile, and its share of execution time only goes up as the number of points grows. This corresponds to the “Computing input similarities…” part of the program, which involves finding the nearest neighbors of every point in the dataset. A careful look at the profile shows that almost all of that time is spent computing high-dimensional Euclidean distances:
+ 9.59%  0.00%  bh_tsne  bh_tsne  [.] VpTree<DataPoint, &(euclidean_distance(DataPoint const&, DataPoint const&))>::search (inlined)
+ 9.02%  9.00%  bh_tsne  bh_tsne  [.] euclidean_distance

Unfortunately, we can’t do the same thing we did the last time we saw a Euclidean distance computation. The code uses vantage-point trees to find nearest neighbors, which fundamentally rely on the metric properties of the distance function. Eliminating the square root would make the new “metric” violate the triangle inequality and would cause the VP tree to give incorrect results (for three collinear points spaced one unit apart, the squared distances are 1, 1, and 4, and 4 > 1 + 1). Furthermore, the square root isn’t even the slowest part — most of the time is just spent iterating over the coordinates:
       │             diff = (x1[d] - x2[d]);
  6.08 │23:   movsd  (%rcx,%rax,8),%xmm1
 21.08 │      subsd  (%rsi,%rax,8),%xmm1
       │             dd += diff * diff;
  0.15 │      lea    0x1(%rax),%rdx
 10.72 │      mulsd  %xmm1,%xmm1
 27.05 │      addsd  %xmm1,%xmm0
       │         for(int d = 0; d < t1._D; d++) {
 18.73 │      cmp    %rax,%rdi
  0.00 │    ↑ jne    2e90 <euclidean_distance(DataPoint const&,
       │         }
       │         return sqrt(dd);
  0.46 │      ucomisd %xmm0,%xmm2
 11.74 │      sqrtsd %xmm0,%xmm1

High-dimensional nearest-neighbor search is a hard problem at the best of times, because of the curse of dimensionality. The best choice here might be to use an approximate nearest-neighbors algorithm, or at least an all-nearest-neighbors algorithm that avoids wasted work. But I’ll save that for a future post.
Conclusion
So how did we do on the original, 400,000-point dataset?
$ ./bhtsne.py -v -r1 -i glove.6B.50d.cut.txt -o glove.6B.50d.tsne.txt
Read the 400000 x 50 data matrix successfully!
Using random seed: 1
Using no_dims = 2, perplexity = 50.000000, and theta = 0.500000
Computing input similarities...
Building tree...
 - point 0 of 400000
...
 - point 390000 of 400000
Input similarities computed in 21370.59 seconds (sparsity = 0.000682)!
Learning embedding...
Iteration 50: error is 131.843420 (50 iterations in 273.23 seconds)
...
Iteration 999: error is 6.373127 (50 iterations in 239.38 seconds)
Fitting performed in 5394.53 seconds.
Wrote the 400000 x 2 data matrix successfully!
./bhtsne.py -v -r1 -i glove.6B.50d.cut.txt -o glove.6B.50d.tsne.txt  26989.04s user 59.58s system 99% cpu 7:32:10.40 total

With all the above optimizations, we’ve saved 2 hours and 45 minutes on this computation, making it 36% faster. The second stage of the computation, where the optimizations apply, is over 178% faster, starting at over four hours and ending up under one and a half. These optimizations have been submitted upstream as a pull request and I hope to make more improvements in the future!
References
L.J.P. van der Maaten. Accelerating t-SNE using Tree-Based Algorithms. Journal of Machine Learning Research 15(Oct):3221-3245, 2014.
L.J.P. van der Maaten and G.E. Hinton. Visualizing High-Dimensional Data Using t-SNE. Journal of Machine Learning Research 9(Nov):2579-2605, 2008.
J. Barnes and P. Hut. A hierarchical O(N log N) force-calculation algorithm. Nature 324(4):446-449, 1986.