I want to write a function that randomly picks elements from a training set, based on the bin probabilities provided. I divide the set indices into 11 bins, then create custom probabilities for them.
import numpy as np

bin_probs = [0.5, 0.3, 0.15, 0.04, 0.0025, 0.0025, 0.001, 0.001, 0.001, 0.001, 0.001]  # probabilities for the 11 bins
X_train = list(range(2000000))
train_probs = bin_probs * int(len(X_train) / len(bin_probs)) # extend probabilities across bin elements
train_probs.extend([0.001]*(len(X_train) - len(train_probs))) # a small fix to match number of elements
train_probs = train_probs/np.sum(train_probs) # normalize
indices = np.random.choice(range(len(X_train)), replace=False, size=50000, p=train_probs)
out_images = X_train[indices.astype(int)] # this is where I get the error
I get the following error:
TypeError: only integer scalar arrays can be converted to a scalar index with 1D numpy indices array
I find this weird, since I have already checked the array of indices that I created: it is 1-D, it contains integers, and its elements are scalars.
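To double-check, this is roughly how I inspected the array (a quick sketch of my checks, not my exact session):

print(indices.ndim)    # 1 -> it is a 1-D array
print(indices.dtype)   # int64 (platform dependent) -> integer dtype
print(indices.shape)   # (50000,)
print(indices[:5])     # a few sample index values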
What am I missing?
Note: I tried to pass indices with astype(int). Same error.
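For what it's worth, I can reproduce the same error with a much smaller, self-contained example (illustrative names only, not my real data), so it does not seem related to the size of X_train or to the probability values:

import numpy as np

small_list = list(range(10))      # a plain Python list, like my X_train
idx = np.array([1, 3, 5])         # a 1-D integer numpy array
small_list[idx]                   # raises the same TypeError
np.array(small_list)[idx]         # works, returns array([1, 3, 5])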