Kaggle Housing Prices (Regression with Ridge, Lasso, and XGBoost Ensembling)

Published: 2025-04-11

Housing Prices

Problem:

Given a house's features, predict its sale price.

Approach:

Data preprocessing:

1. Use matplotlib scatter plots to pick features strongly correlated with price: YearBuilt, YearRemodAdd, GarageYrBlt. Rather than using the raw years directly, subtract each from the sale year (YrSold) so the features become ages, which carry more information.
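The year-to-age transformation can be sketched as follows (the row values here are hypothetical; only the column names come from the Ames dataset):

```python
import pandas as pd

# Toy rows with made-up values; column names match the Ames dataset.
df = pd.DataFrame({
    "YrSold": [2008, 2010],
    "YearBuilt": [2000, 1995],
    "YearRemodAdd": [2005, 1995],
    "GarageYrBlt": [2000.0, 1996.0],
})
for col in ["YearBuilt", "YearRemodAdd", "GarageYrBlt"]:
    df[col] = df["YrSold"] - df[col]  # age at sale instead of a raw year
```

After this, a house built in 2000 and sold in 2008 has YearBuilt = 8, which is directly comparable across sale years.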

2. Select the numeric columns and fill nulls with the column mean; select the non-numeric columns and fill nulls with the column mode.
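A minimal sketch of this imputation scheme, on a hypothetical two-column frame:

```python
import numpy as np
import pandas as pd

# Hypothetical frame with one numeric and one categorical column.
df = pd.DataFrame({
    "LotArea": [8000.0, np.nan, 10000.0],
    "MSZoning": ["RL", None, "RL"],
})
for col in df.select_dtypes(include=np.number).columns:
    df[col] = df[col].fillna(df[col].mean())      # numeric: column mean
for col in df.select_dtypes(exclude=np.number).columns:
    df[col] = df[col].fillna(df[col].mode()[0])   # categorical: column mode
```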

3. Take the names of all non-object features, make heavily skewed ones more normal by applying a log transform, then standardize. This keeps values from blowing up and removes scale differences between features.

4. Finally, one-hot encode the categorical columns.
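Steps 3 and 4 can be sketched on a small hypothetical frame with one right-skewed numeric column and one categorical column:

```python
import numpy as np
import pandas as pd
from scipy.stats import skew

# Hypothetical data: GrLivArea has a long right tail.
df = pd.DataFrame({
    "GrLivArea": [800.0, 900.0, 1000.0, 1100.0, 5000.0],
    "MSZoning": ["RL", "RM", "RL", "RL", "RM"],
})
numeric_feats = df.dtypes[df.dtypes != "object"].index
skewed = df[numeric_feats].apply(lambda x: skew(x.dropna()))
skewed = skewed[skewed > 0.75].index
df[skewed] = np.log1p(df[skewed])                      # compress the long tail
df[numeric_feats] = df[numeric_feats].apply(lambda x: (x - x.mean()) / x.std())
df = pd.get_dummies(df)                                # one-hot encode categoricals
```

After `get_dummies`, `MSZoning` becomes indicator columns such as `MSZoning_RL` and `MSZoning_RM`.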

Modeling:

1. Split all_data back into the training rows and test rows, and take the Y label from the training set.

2. Define a root-mean-squared-error function that uses 5-fold cross-validation to evaluate each model.
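The evaluation helper looks like this; the data below is hypothetical, just to exercise the function:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def rmse_cv(model, X_train, Y):
    # cross_val_score negates MSE (so higher is better); flip the sign back.
    rmse = np.sqrt(-cross_val_score(model, X_train, Y,
                                    scoring="neg_mean_squared_error", cv=5))
    return rmse  # one RMSE value per fold

# Hypothetical linear data for demonstration.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(scale=0.1, size=100)
scores = rmse_cv(Ridge(alpha=1.0), X, Y)
```

`scores.mean()` and `scores.std()` are the two numbers printed for each model later in the script.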

3. Build a Ridge model. First set up a list of alphas, train with each, and plot the cross-validation results to read off the best alpha. Fit Ridge on the training data, predict on the test set, and apply expm1 to the predictions: the Y label was log1p-transformed during preprocessing, so expm1 inverts it back to the price scale. Write the predictions to a CSV file, then print the model's cross-validated mean and standard deviation (std).
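The log1p/expm1 pairing is worth spelling out: they are exact inverses, so whatever scale the target was trained on can be undone on the predictions.

```python
import numpy as np

# Hypothetical prices: log1p at training time, expm1 at prediction time.
prices = np.array([100000.0, 250000.0])
logged = np.log1p(prices)     # what the model is trained against
recovered = np.expm1(logged)  # applied to the model's predictions
```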

4. Build a Lasso model: pass LassoCV a list of candidate alphas and it selects the best one itself. Predict as before and print the model's mean and std.
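LassoCV's built-in alpha selection can be shown on hypothetical sparse data where only the first feature matters:

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Hypothetical data: y depends only on the first of five features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)
model = LassoCV(alphas=[1, 0.1, 0.01, 0.001]).fit(X, y)
# model.alpha_ holds the alpha chosen by internal cross-validation
```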

5. Build an XGBoost model. First construct XGBoost's dedicated train and test datasets with DMatrix, run xgb.cv cross-validation to find the best number of boosting rounds, then pass that to xgb.XGBRegressor. Predict as before and print the model's mean and std.

6. Build a stacked model with XGBoost as the main learner and ridge and lasso as auxiliaries. Use StackingCVRegressor to combine xgboost, ridge, and lasso; predict as before and print the model's mean and std.

7. Model blending: take the predictions of ridge, lasso, xgb, and xgb_r_l, assign each a weight, and sum them into the final blended result.
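The blend itself is a weighted sum in log space, inverted afterwards. The prediction values below are made up; the 0.4/0.6 weights are the ones the script uses for ridge and the stacked model:

```python
import numpy as np

# Hypothetical log-scale predictions from two of the models.
preds_ridge = np.array([11.8, 12.1])
preds_stack = np.array([11.9, 12.0])
blend = 0.4 * preds_ridge + 0.6 * preds_stack  # weights as in the script
final = np.expm1(blend)                        # back to the price scale
```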

Code:

import sys
import numpy as np
import pandas as pd
import xgboost
from matplotlib import pyplot as plt
from scipy.stats import skew
from sklearn.linear_model import Ridge, LassoCV
from sklearn.model_selection import cross_val_score
from mlxtend.regressor import StackingCVRegressor
import warnings

warnings.filterwarnings("ignore")
pd.set_option('display.width', 1000)
pd.set_option('display.max_colwidth', 1000)
pd.set_option("display.max_rows", 1000)
pd.set_option("display.max_columns", 1000)


# Data initialization function
def init_data(train_data, test_data):
    # Drop Id and SalePrice, then concatenate train and test
    all_data = pd.concat([train_data.drop(['Id', 'SalePrice'], axis=1), test_data.drop(['Id'], axis=1)], axis=0)
    # Replace each year feature with its age relative to the sale year
    for i in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']:
        all_data[i] = all_data['YrSold'] - all_data[i]

    # Numeric columns: fill nulls with the column mean
    numeric_cols = all_data.select_dtypes(include=np.number).columns
    for col in numeric_cols:
        all_data[col] = all_data[col].fillna(all_data[col].mean())
    # Non-numeric columns: fill nulls with the column mode
    non_numeric_cols = all_data.select_dtypes(exclude=np.number).columns
    for col in non_numeric_cols:
        all_data[col] = all_data[col].fillna(all_data[col].mode()[0])

    # Normalize skewed features
    train_data["SalePrice"] = np.log1p(train_data["SalePrice"])
    numeric_feats = all_data.dtypes[all_data.dtypes != 'object'].index  # names of non-object features
    skewed_feats = train_data[numeric_feats].apply(lambda x: skew(x.dropna()))
    skewed_feats = skewed_feats[skewed_feats > 0.75]
    skewed_feats = skewed_feats.index
    all_data[skewed_feats] = np.log1p(all_data[skewed_feats])
    # Standardize continuous values
    all_data[numeric_feats] = all_data[numeric_feats].apply(lambda x: (x - x.mean()) / (x.std()))

    # One-hot encoding
    all_data = pd.get_dummies(all_data)

    return all_data


# Cross-validated RMSE of a model on X_train and Y
def rmse_cv(model, X_train, Y):
    rmse = np.sqrt(-cross_val_score(model, X_train, Y, scoring='neg_mean_squared_error', cv=5))
    return rmse


if __name__ == '__main__':
    train_data = pd.read_csv('/kaggle/input/home-data-for-ml-course/train.csv')
    test_data = pd.read_csv('/kaggle/input/home-data-for-ml-course/test.csv')
    all_data = init_data(train_data, test_data)
    X_train = all_data[:train_data.shape[0]]
    X_test = all_data[train_data.shape[0]:]
    Y = train_data['SalePrice']

    # Ridge regression model
    alphas = [0.05, 0.1, 0.3, 1, 3, 5, 10, 15, 30, 50, 75]
    cv_ridge = [rmse_cv(Ridge(alpha=i), X_train, Y).mean() for i in alphas]
    cv_ridge = pd.Series(cv_ridge, index=alphas)
    cv_ridge.plot(title="Validation")
    plt.xlabel("Alpha")
    plt.ylabel("RMSE")
    plt.show()
    model_ridge = Ridge(alpha=10)
    ridge_model_after_fit = model_ridge.fit(X_train, Y)
    ridge_preds = np.expm1(model_ridge.predict(X_test))
    solution = pd.DataFrame({"Id": test_data['Id'], "SalePrice": ridge_preds})
    solution.to_csv('ridge_sol.csv', index=False)
    print("model_ridge:" + str(rmse_cv(model_ridge, X_train, Y).mean()) + "   " + str(
        rmse_cv(model_ridge, X_train, Y).std()))

    # Lasso regression model
    model_lasso = LassoCV(alphas=[20, 10, 1, 0.1, 0.01, 0.001, 0.0005]).fit(X_train, Y)
    lasso_preds = np.expm1(model_lasso.predict(X_test))
    solution = pd.DataFrame({"Id": test_data['Id'], "SalePrice": lasso_preds})
    solution.to_csv('lasso_sol.csv', index=False)
    print("model_lasso:" + str(rmse_cv(model_lasso, X_train, Y).mean()) + "   " + str(
        rmse_cv(model_lasso, X_train, Y).std()))

    # XGBoost regression model
    xgb_cv_train = xgboost.DMatrix(X_train, label=Y)
    xgb_cv_test = xgboost.DMatrix(X_test)
    params = {"objective": "reg:squarederror",
              "max_depth": 4,
              "colsample_bylevel": 0.5,
              "learning_rate": 0.1,
              "random_state": 20}
    model_xgb_test = xgboost.cv(
        params,
        xgb_cv_train,
        num_boost_round=500,
        early_stopping_rounds=100,
        as_pandas=True
    )
    # idxmin() is a 0-based index, so add 1 for the round count; keep the
    # learning_rate used in cv so the tuned round count actually applies
    model_xgb = xgboost.XGBRegressor(n_estimators=model_xgb_test["test-rmse-mean"].idxmin() + 1,
                                     max_depth=4, learning_rate=0.1)
    model_xgb.fit(X_train, Y)
    xgb_preds = np.expm1(model_xgb.predict(X_test))
    solution = pd.DataFrame({"Id": test_data['Id'], "SalePrice": xgb_preds})
    solution.to_csv('xgb_sol.csv', index=False)
    print("model_xgb:" + str(rmse_cv(model_xgb, X_train, Y).mean()) + "   " + str(rmse_cv(model_xgb, X_train, Y).std()))

    # Stacking: XGBoost as meta-learner, ridge and lasso as base learners
    model_x_r_l = StackingCVRegressor(regressors=(model_ridge, model_lasso, model_xgb),
                                      meta_regressor=model_xgb,
                                      use_features_in_secondary=True)
    model_x_r_l.fit(X_train, Y)
    model_x_r_l_preds = np.expm1(model_x_r_l.predict(X_test))
    solution = pd.DataFrame({"Id": test_data['Id'], "SalePrice": model_x_r_l_preds})
    solution.to_csv('model_x_r_l.csv', index=False)
    print("model_x_r_l:" + str(rmse_cv(model_x_r_l, X_train, Y).mean()) + "   " + str(
        rmse_cv(model_x_r_l, X_train, Y).std()))

    # Blend the models
    preds1 = model_ridge.predict(X_test)
    preds2 = model_lasso.predict(X_test)
    preds3 = model_xgb.predict(X_test)
    preds4 = model_x_r_l.predict(X_test)
    combine_preds = 0.4 * preds1 + 0.6 * preds4
    combine_preds = np.floor(np.expm1(combine_preds))
    solution = pd.DataFrame({"Id": test_data['Id'], "SalePrice": combine_preds})
    solution.to_csv('/kaggle/working/Submission.csv', index=False)