The reason you didn't get an error from your query is that data.table recycles the values in i when their lengths are multiples of one another. In other words, because the single 1 supplied for am can be recycled to match the two values supplied for the other key columns, data.table does this without telling you. If the lengths of the supplied values weren't multiples of each other, you would get a warning. For example
DT[.(c(1,0),c(5,4,3),c(8,6,4))]
will give you a warning complaining about a remainder of 1 item, the same complaint you would see when typing data.table(c(1,0), c(5,4,3), c(8,6,4)). Whenever joining as X[Y], both X and Y should be thought of as data.tables.
If you instead use CJ, as in
DT[CJ(c(1,0),c(5,4,3),c(8,6,4))]
then it builds every combination of the supplied values for you and data.table will return the results you expect.
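To see what CJ actually builds, here is a short sketch using the same assumed DT as above; note that by default the join returns NA-filled rows for combinations with no match, which nomatch = 0L drops:
# Cross join: every combination of the supplied values, 2 * 3 * 3 = 18 rows
CJ(c(1, 0), c(5, 4, 3), c(8, 6, 4))

# Join on every combination; nomatch = 0L keeps only combinations that exist
DT[CJ(am = c(1, 0), gear = c(5, 4, 3), carb = c(8, 6, 4)), nomatch = 0L]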
From the vignette (bolding is mine):
What’s happening here? Read this again. The value provided for the
second key column “MIA” has to find the matching values in dest key
column on the matching rows provided by the first key column origin.
We can not skip the values of key columns before. Therefore we provide
all unique values from key column origin. “MIA” is automatically
recycled to fit the length of unique(origin) which is 3.
Just for completeness, the vector scan syntax will work without using CJ:
DT[am == 1 & gear == 4 & carb == 4]
or
DT[am == 1 & (gear == 3 | gear == 4) & (carb == 4 | carb == 2)]
How do you know if you need a binary search? If the speed of subsetting is unbearable, then you need a binary search. For example, on a 48M row data.table I'm playing with, the difference between a binary search and a vector scan is staggering: the vector scan takes 1.490 seconds of elapsed time, while the binary search takes only 0.001 seconds. That, of course, assumes I've already keyed the data.table. If I include the time it takes to set the key, then setting the key and performing the subset together takes 1.628 seconds. So you have to pick your poison.
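If you want to see the difference on your own machine, here is a rough benchmark sketch on synthetic data (not my 48M row table; the sizes and values are made up for illustration, and it needs a few GB of RAM):
library(data.table)
set.seed(1)
N  <- 5e7
DT <- data.table(am   = sample(0:1, N, TRUE),
                 gear = sample(3:5, N, TRUE),
                 carb = sample(c(1, 2, 4, 6, 8), N, TRUE))

system.time(DT[am == 1 & gear == 4 & carb == 4])  # vector scan, no key needed
system.time(setkey(DT, am, gear, carb))           # one-off cost of setting the key
system.time(DT[.(1, 4, 4)])                       # binary search on the key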