Kaggle Store Sales (Regression + Multi-table Merge + XGBoost)

Published: 2025-04-18

Store Sales

Problem:

We are given many stores, each store's type, and how much each store sold at each point in time.
We are also given an oil-price table, a holidays table, and a transactions table.
The task is to predict the sales of each store in the test set.

Data processing:

1. Since the data is spread across multiple tables, the auxiliary tables must first be merged into train. Pick a column that train and the other table share as the merge key. Note that test must be processed the same way, so that the columns stay consistent. Because the merge must not change the number of rows in test either, deduplicate the other table on the key before merging.
2. After merging, handle missing values. Whether to fill with the mean, the mode, or neighbouring values depends on the feature; the oil price, for example, should be filled with neighbouring values.
3. One-hot encode the non-numeric features.
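The three steps above can be sketched on toy data (the frames and values here are made up, standing in for train.csv and oil.csv): deduplicate the lookup table so a left merge cannot add rows, fill gaps with neighbouring values, then one-hot encode the string columns.

```python
import pandas as pd

# Toy stand-ins for train.csv and oil.csv (hypothetical values).
train = pd.DataFrame({'date': ['2017-01-01', '2017-01-02', '2017-01-03'],
                      'sales': [10.0, 12.0, 9.0]})
oil = pd.DataFrame({'date': ['2017-01-01', '2017-01-01', '2017-01-03'],
                    'dcoilwtico': [50.0, 50.0, None]})

# 1. Deduplicate the lookup table on the key first, so a left merge
#    keeps exactly one row per train row.
oil = oil.drop_duplicates(subset=['date'])
merged = pd.merge(train, oil, how='left', on='date')
assert len(merged) == len(train)

# 2. Fill gaps in the oil price with neighbouring values.
merged['dcoilwtico'] = merged['dcoilwtico'].ffill().bfill()

# 3. One-hot encode whatever object columns remain.
str_cols = merged.select_dtypes(include='object').columns
merged = pd.concat([merged.drop(columns=str_cols),
                    pd.get_dummies(merged[str_cols])], axis=1)
print(merged.shape)
```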

Building the model:

1. Build an XGBoost model and run a grid search over a parameter grid to find the hyperparameters.
2. When training xgb_model, take the log of Y_train first (log1p), and apply the inverse transform (expm1) to the predictions on X_test, to avoid blowing up the precision.
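The log round trip in step 2 is worth spelling out: `np.log1p` (log(1 + y)) is safe for zero sales, and `np.expm1` is its exact inverse, so predictions come back on the original scale. A minimal check:

```python
import numpy as np

# Hypothetical sales values, including zero.
y = np.array([0.0, 3.0, 250.0])

# Transform before fitting, invert after predicting.
y_log = np.log1p(y)
y_back = np.expm1(y_log)

assert np.allclose(y, y_back)
```

Using plain `np.exp` on predictions trained against `np.log1p` would be off by one unit everywhere, which is why the submission step below uses `expm1`.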

Code:
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost import XGBRegressor
import seaborn as sns

pd.set_option('display.width', 1000)
pd.set_option('display.max_colwidth', 1000)
pd.set_option("display.max_rows", 1000)
pd.set_option("display.max_columns", 1000)

# Plot the relationship between one feature and the target
def show(data,colx,coly):
    if data[colx].dtype == 'object' or data[colx].dtype == 'category' or len(data[colx].unique())<20:
        sns.boxplot(x=colx, y=coly, data=data)
    else:
        plt.scatter(data[colx], data[coly])
    plt.xlabel(colx)
    plt.ylabel(coly)
    plt.show()

if __name__ == '__main__':
    data_train = pd.read_csv('/kaggle/input/store-sales-time-series-forecasting/train.csv')
    data_test = pd.read_csv('/kaggle/input/store-sales-time-series-forecasting/test.csv')
    data_holiday = pd.read_csv('/kaggle/input/store-sales-time-series-forecasting/holidays_events.csv')
    data_oil = pd.read_csv('/kaggle/input/store-sales-time-series-forecasting/oil.csv')
    data_store = pd.read_csv('/kaggle/input/store-sales-time-series-forecasting/stores.csv')
    data_transaction = pd.read_csv('/kaggle/input/store-sales-time-series-forecasting/transactions.csv')

    # Merge data_holiday
    data_holiday = data_holiday.drop_duplicates(subset=['date'])

    data_train = pd.merge(data_train, data_holiday, how='left', left_on='date', right_on='date')
    data_train = data_train.drop(columns=['type','locale','locale_name','description'],axis=1)
    data_train['transferred'] = data_train['transferred'].fillna(data_train['transferred'].mode()[0])

    data_test = pd.merge(data_test, data_holiday, how='left', left_on='date', right_on='date')
    data_test = data_test.drop(columns=['type', 'locale', 'locale_name', 'description'], axis=1)
    data_test['transferred'] = data_test['transferred'].fillna(data_test['transferred'].mode()[0])

    # Merge data_oil
    data_oil = data_oil.drop_duplicates(subset=['date'])

    data_train = pd.merge(data_train, data_oil, how='left', left_on='date', right_on='date')
    data_train['dcoilwtico'] = data_train['dcoilwtico'].ffill().bfill()

    data_test = pd.merge(data_test, data_oil, how='left', left_on='date', right_on='date')
    data_test['dcoilwtico'] = data_test['dcoilwtico'].ffill().bfill()

    # Merge data_store
    data_store = data_store.drop_duplicates(subset=['store_nbr'])

    data_train = pd.merge(data_train, data_store, how='left', left_on='store_nbr', right_on='store_nbr')

    data_test = pd.merge(data_test, data_store, how='left', left_on='store_nbr', right_on='store_nbr')

    # Merge data_transaction (keeping only the first transactions row per store)
    data_transaction = data_transaction.drop_duplicates(subset=['store_nbr'])
    data_transaction = data_transaction.drop(columns=['date'],axis=1)

    data_train = pd.merge(data_train, data_transaction, how='left', left_on='store_nbr', right_on='store_nbr')

    data_test = pd.merge(data_test, data_transaction, how='left', left_on='store_nbr', right_on='store_nbr')

    # Concatenate data_train and data_test
    data_all = pd.concat([data_train.drop(['id', 'sales'], axis=1), data_test.drop(['id'], axis=1)], axis=0)
    data_all['date'] = pd.to_datetime(data_all['date'], errors='coerce')
    data_all['date'] = data_all['date'].astype('int64')

    # One-hot encoding (str_cols must be defined before it is used)
    str_cols = data_all.select_dtypes(include='object').columns
    data_all = pd.concat([data_all.drop(str_cols, axis=1), pd.get_dummies(data_all[str_cols])], axis=1)

    X_train = data_all[:data_train.shape[0]]
    Y_train = data_train['sales']
    X_train,X_val,Y_train,Y_val = train_test_split(X_train,Y_train,test_size=0.2,random_state=42)
    X_test = data_all[data_train.shape[0]:]

    xgb_param_grid = {
        'n_estimators': [50, 100, 200],  # number of trees
        'max_depth': [3, 4, 5, 6],  # maximum tree depth
        'learning_rate': [0.01, 0.1, 0.2],  # learning rate
        'subsample': [0.8, 1.0],  # fraction of rows used to grow each tree
    #    'colsample_bytree': [0.8, 1.0],  # fraction of features used per tree
    #    'gamma': [0, 0.1, 0.2],  # minimum loss reduction required to split
    #    'min_child_weight': [1, 3, 5],  # minimum sum of instance weight in a child
    #    'reg_alpha': [0, 0.1, 1],  # L1 regularisation strength
    #    'reg_lambda': [0, 0.1, 1]  # L2 regularisation strength
    }
    xgb_model = GridSearchCV(
        estimator=XGBRegressor(random_state=42),  # model whose hyperparameters are searched
        param_grid=xgb_param_grid,  # candidate hyperparameter values
        cv=3,  # 3-fold cross-validation
        n_jobs=-1,  # use all CPU cores in parallel
    )
    xgb_model.fit(X_train, np.log1p(Y_train))
    print('xgb validation score: ' + str(xgb_model.score(X_val, np.log1p(Y_val))))


    Submission = pd.DataFrame({
        'id': data_test['id'],
        'sales': np.expm1(xgb_model.predict(X_test))  # expm1 inverts the log1p applied to the target
    })
    Submission.to_csv('/kaggle/working/Submission.csv', index=False)
