Why does the speed of histcounts vary so much?

I have a vector with integer or floating point values and want to count the number of occurrences of each value.
histcounts() uses the last element of the input edges as the upper limit of the last bin, so I have to append a number larger than the greatest element to count correctly. This slows down the processing by a factor of more than 2 when the values are integers:
% Integer values:
x = round((1:1e6) / 1); % Same behavior with "/ 10"
x = x(randperm(numel(x)));
ux = unique(x);
ux2 = [ux, Inf];
ux3 = [ux, 1e8]; % Any huge number
ux4 = [ux, ux(end) + 1];
timeit(@() histcounts(x, ux)) % 0.045
timeit(@() histcounts(x, ux2)) % 0.171 % Much slower!!!
timeit(@() histcounts(x, ux3)) % 0.199 % Slower than with Inf?
timeit(@() histcounts(x, ux4)) % 0.0787 % Fast again
% BinMethod=Integers is fast, but the number of bins is limited:
timeit(@() histcounts(x, 'BinMethod', 'Integers')) % 0.0059 Fast!
numel(histcounts(x, 'BinMethod', 'Integers')) % But just 10001 instead of 1e6
% According to the documentation, this should be 65536 ?!?
% Floating point values:
y = rand(1, 1e6);
uy = unique(y);
uy2 = [uy, Inf];
uy3 = [uy, uy(end) + 1];
timeit(@() histcounts(y, uy)) % 0.11148
timeit(@() histcounts(y, uy2)) % 0.2066
timeit(@() histcounts(y, uy3)) % 0.2100
% Local R2018b: similar behavior for integers, floating point values processed faster:
% 0.073 % Integer, ux
% 0.199 % [ux, Inf]
% 0.189 % [ux, 1e8]
% 0.061 % [ux, ux(end) + 1]
% 0.162 % Floating point, uy
% 0.167 % [uy, Inf]
% 0.315 % [uy, uy(end) + 1] % half speed!!!
I'd expect nearly the same speed, because it is almost the same amount of work.
By the way, a simple counting function is also fast:
timeit(@() simpleCount(x)) % 0.0602
timeit(@() simpleCount(y)) % 0.0628
function N = simpleCount(x)
% Count the occurrences of each unique value in x
S = sort(x);
q = [true, diff(S) ~= 0];          % Mask: start of each run of equal values
N = diff([find(q), numel(x) + 1]); % Run lengths = counts per unique value
end
histcounts()'s behavior of counting the last element into the previous bin is not useful here, and attaching a final edge slows down the processing massively. I've already sent an enhancement request to TMW.

5 Comments

Interesting. I tried
ux2 = [ux ux(end)+1];
instead of
ux2 = [ux Inf];
and did not have the 2X slowdown in the integer case.
When an inf or ux(end)+1 edge is appended to the end of the edge vector, the final bin will only contain counts for the largest original edge value.
An alternative is to increase the size of the last bin rather than appending another edge value.
ux = unique(x);
ux(end) = ux(end)+1;
But when counting integers, all of this is avoided with the integers bin method.
histcounts(y,'BinMethod','integers')
@the cyclist: I've included your suggestion in the code. Yes, appending ux(end)+1 does not cause a slowdown, but e.g. 1e8 does.
@Adam Danz: If the number of occurrences of each number is wanted, expanding the last bin counts the last two numbers together:
N = histcounts(1:4, [1,2,3,5])
% But [1,1,1,1] is wanted
The documentation says that BinMethod='Integers' limits the number of bins to 2^16 = 65'536. For 1e6 numbers, 10'001 bins are used. It seems like "65'536 bins" means 65'537 edges:
N = histcounts(1:65537, 'BinMethod', 'Integers');
fprintf('Numel: %d Last count: %d\n', numel(N), N(end))
% Numel: 65537 Last count: 1
N = histcounts(1:65538, 'BinMethod', 'Integers');
fprintf('Numel: %d Last count: %d\n', numel(N), N(end))
% Numel: 6555 ??? Last count: 4 ???
Unfortunately, for this reason the integer binning mode does not work for 1e6 distinct values. It is not documented what happens for more than 2^16 bins, and my code cannot guarantee that the number of distinct elements does not exceed this limit.
A simple C-mex function is 4 times faster than histcounts() without the limit of 2^16 bins:
// CountInts.c
// N = CountInts(x, low, high)
// Author: Jan, Heidelberg, 2021, License: CC BY-SA 3.0
#include "mex.h"

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
  mwSignedIndex n, i, k, low, high;
  double *X, *N;

  X    = mxGetDoubles(prhs[0]);
  n    = mxGetNumberOfElements(prhs[0]);
  low  = (mwSignedIndex) mxGetScalar(prhs[1]);
  high = (mwSignedIndex) mxGetScalar(prhs[2]);
  plhs[0] = mxCreateDoubleMatrix(1, high - low + 1, mxREAL);
  N = mxGetDoubles(plhs[0]);
  N -= low;  // Shift pointer N such that X[i]=low increases N[0]
  for (i = 0; i < n; i++) {
    k = (mwSignedIndex) X[i];
    if (k >= low && k <= high) {
      ++(N[k]);
    }
  }
}
If N is created as UINT32 vector, this is even 10 times faster than histcounts().
Looks like groupcounts does not have this limitation:
N = groupcounts((1:70000)');
fprintf('Numel: %d Last count: %d\n', numel(N), N(end))
Numel: 70000 Last count: 1
And its timing for counting integers is comparable to the [ux, ux(end) + 1] method, though somewhat slower.
x = round((1:1e6) / 1);
x = x(randperm(numel(x)));
ux = [unique(x), inf];
ux2 = [unique(x), max(x)+1];
timeit(@() histcounts(x, ux))
ans = 0.1085
timeit(@() histcounts(x, ux2))
ans = 0.0668
timeit(@() groupcounts(x'))
ans = 0.0862
isequal(histcounts(x,ux), histcounts(x, ux2), groupcounts(x')')
ans = logical
1
@Adam Danz: Thanks for this useful information.


Answers (1)

It's just an assumption, but histcounts probably uses binary search on the finite edge values. The large last edge probably penalizes the speed. I am not sure how Inf is processed with binary search (possibly an extra comparison check?).

2 Comments

I agree. It looks like histcounts() applies some educated guesses about the bins, but in the case of Inf as the final element this is too smart.
The limitation to 2^16 (or 2^16+1) bins for BinMethod 'Integers' sounds like a lookup table method is applied. As far as I can see, it is not documented what histcounts() does for more than 2^16 bins. A simple LUT approach is faster:
x = randi([0, 255], 1, 1e6);
timeit(@() histcounts(x, 'BinMethod', 'Integers'))
ans = 0.0081
timeit(@() lutCount(x, 0, 255))
ans = 0.0036
% Fairer test without knowing the limits in advance:
timeit(@() lutCount(x, min(x), max(x)))
ans = 0.0047
The alternative accumarray is not faster either:
timeit(@() accumarray(x.' - (min(x) - 1), 1))
ans = 0.0047
function N = lutCount(x, low, high)
N = zeros(1, high - low + 1);
pad = 1 - low;
for k = 1:numel(x)
   a = x(k) + pad;
   N(a) = N(a) + 1;  % Evil: no checks of array limits...
end
end
With the CountInts.c MEX function from my other comment, the speedup is even higher.
My conclusion: histcounts is not efficient for counting elements when no smart binning is needed. The BinMethod 'Integers' is useful only when it can be guaranteed that 2^16 bins are enough - in other words, it is a source of silent bugs.
NOTE: Speed measurements in MATLAB Online are fragile. Please repeat this on a local machine.
"The BinMethod 'Integers' is useful for the case, when it can be guaranteed, that 2^16 bins are enough"
Just go ahead and use accumarray instead.


Release

R2021a

Asked:

Jan
on 22 Aug 2021

Commented:

Jan
on 22 Aug 2021
