I'm running the following code:
import xgboost as xgb
from sklearn.datasets import fetch_covtype
from sklearn.model_selection import train_test_split
import time
# Fetch dataset using sklearn
cov = fetch_covtype()
X = cov.data
y = cov.target
# Create 0.75/0.25 train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, train_size=0.75,
                                                     random_state=42)
# Specify sufficient boosting iterations to reach a minimum
num_round = 3000
# Leave most parameters as default
param = {'objective': 'multi:softmax',  # Specify multiclass classification
         'num_class': 8,                # Number of possible output classes
         'tree_method': 'gpu_hist'      # Use GPU accelerated algorithm
         }
# Convert input data from numpy to XGBoost format
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)
gpu_res = {}  # Store evaluation results
tmp = time.time()
# Train model
xgb.train(param, dtrain, num_round, evals=[(dtest, 'test')], evals_result=gpu_res)
print("GPU Training Time: %s seconds" % (str(time.time() - tmp)))
# Repeat for CPU algorithm
tmp = time.time()
param['tree_method'] = 'hist'
cpu_res = {}
xgb.train(param, dtrain, num_round, evals=[(dtest, 'test')], evals_result=cpu_res)
print("CPU Training Time: %s seconds" % (str(time.time() - tmp)))
I'm getting the following error:
[bt] (0) /home/darfeder/anaconda3/lib/python3.8/site-packages/xgboost/lib/libxgboost.so(+0xa0c64) [0x7f6c9bba4c64]
[bt] (1) /home/darfeder/anaconda3/lib/python3.8/site-packages/xgboost/lib/libxgboost.so(+0x168b73) [0x7f6c9bc6cb73]
[bt] (2) /home/darfeder/anaconda3/lib/python3.8/site-packages/xgboost/lib/libxgboost.so(+0x168e82) [0x7f6c9bc6ce82]
[bt] (3) /home/darfeder/anaconda3/lib/python3.8/site-packages/xgboost/lib/libxgboost.so(+0x1994f7) [0x7f6c9bc9d4f7]
[bt] (4) /home/darfeder/anaconda3/lib/python3.8/site-packages/xgboost/lib/libxgboost.so(XGBoosterUpdateOneIter+0x59) [0x7f6c9bb94d39]
[bt] (5) /home/darfeder/anaconda3/lib/python3.8/lib-dynload/../../libffi.so.7(+0x69dd) [0x7f6cee5279dd]
[bt] (6) /home/darfeder/anaconda3/lib/python3.8/lib-dynload/../../libffi.so.7(+0x6067) [0x7f6cee527067]
[bt] (7) /home/darfeder/anaconda3/lib/python3.8/lib-dynload/_ctypes.cpython-38-x86_64-linux-gnu.so(+0x1097a) [0x7f6cee53d97a]
[bt] (8) /home/darfeder/anaconda3/lib/python3.8/lib-dynload/_ctypes.cpython-38-x86_64-linux-gnu.so(+0x110db) [0x7f6cee53e0db]
I suspect that it's caused by an incompatibility between my current Python version:

3.8.3 (default, Jul 2 2020, 16:21:59) [GCC 7.3.0]

and the CUDA installation:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Wed_Oct_23_19:24:38_PDT_2019
Cuda compilation tools, release 10.2, V10.2.89
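
To narrow this down, I can run a minimal gpu_hist sanity check that is independent of the covtype data (a diagnostic sketch; the random toy data and binary objective are only placeholders). If this also crashes, the problem is in the XGBoost/CUDA setup rather than in the script above:

import numpy as np
import xgboost as xgb

print("XGBoost version:", xgb.__version__)

# Tiny random dataset, just enough to exercise the GPU code path
X_small = np.random.rand(100, 5)
y_small = np.random.randint(0, 2, size=100)
dsmall = xgb.DMatrix(X_small, label=y_small)

# One boosting round with the GPU histogram algorithm
xgb.train({'objective': 'binary:logistic', 'tree_method': 'gpu_hist'},
          dsmall, num_boost_round=1)
print("gpu_hist completed on the toy data")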
question from:
https://stackoverflow.com/questions/65623062/python-xgboost-gpu-error-check-failed-gpu-predictor