
Kaggle - Titanic Survival Prediction


This is my first Kaggle competition, and the Titanic challenge makes a good entry point. The goal is to predict each passenger's survival from the passenger information provided. Everything below is done in Python 3.

I. Data Overview

From the Kaggle page we know the training set has 891 records and the test set has 418. The variables provided are PassengerId, Survived (training set only), Pclass, Name, Sex, Age, SibSp, Parch, Ticket, Fare, Cabin and Embarked.

First, look at the basic information of the training and test sets to get an overall picture of the data size, the type of each feature, and which features have missing values:

import pandas as pd
import numpy as np
import re
import matplotlib.pyplot as plt  # needed for the plots below (not shown in the original imports)
from sklearn.feature_selection import chi2              # imported in the original but not used later
from sklearn.feature_extraction import DictVectorizer   # imported in the original but not used later
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import cross_val_score    # moved to sklearn.model_selection in newer scikit-learn

# Read the data
train = pd.read_csv('/Users/jingxuan.ljx/Documents/machine learning/kaggle/Titanic/train.csv')
test = pd.read_csv('/Users/jingxuan.ljx/Documents/machine learning/kaggle/Titanic/test.csv')
train_test_combined = train.append(test, ignore_index=True)  # newer pandas: pd.concat([train, test], ignore_index=True)

# Check the basic information
print(train.info())
print(test.info())

The output is:

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
PassengerId    891 non-null int64
Survived       891 non-null int64
Pclass         891 non-null int64
Name           891 non-null object
Sex            891 non-null object
Age            714 non-null float64
SibSp          891 non-null int64
Parch          891 non-null int64
Ticket         891 non-null object
Fare           891 non-null float64
Cabin          204 non-null object
Embarked       889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.6+ KB
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 418 entries, 0 to 417
Data columns (total 11 columns):
PassengerId    418 non-null int64
Pclass         418 non-null int64
Name           418 non-null object
Sex            418 non-null object
Age            332 non-null float64
SibSp          418 non-null int64
Parch          418 non-null int64
Ticket         418 non-null object
Fare           417 non-null float64
Cabin          91 non-null object
Embarked       418 non-null object
dtypes: float64(2), int64(4), object(5)
memory usage: 36.0+ KB

We can see that Age, Cabin and Embarked have missing values in the training set, and Age, Cabin and Fare have missing values in the test set.

Next, look at the actual format of the data:

# By default, print the first 5 rows
print(train.head())

I use the Sublime editor, and with this many columns the printout wraps over several lines and is hard to read, so I looked at the data directly on Kaggle instead (the original post included a screenshot of the Kaggle data preview here).
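If you would rather stay in the terminal, pandas' display options can prevent the wrapping. This is an optional sketch; the option names are standard pandas, and the width value is an arbitrary choice:

pd.set_option('display.max_columns', None)  # show every column instead of truncating
pd.set_option('display.width', 200)         # widen the printout so rows do not wrap
print(train.head())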

II. Initial Data Analysis

1. Basic passenger attributes

For the categorical variables Survived, Sex, Pclass and Embarked, pie charts show how they are composed. For the discrete numeric variables SibSp and Parch, bar charts show their distributions. For the continuous numeric variables Age and Fare, histograms show their distributions.

# Pie charts for the categorical variables
# labeldistance: how far the labels sit from the center; 1.1 means 1.1 x radius
# autopct: format of the text inside the pie, e.g. "%5.2f%%"
# shadow: whether the pie has a shadow
# startangle: starting angle; 0 starts at 0 degrees going counter-clockwise, 90 usually looks better
# pctdistance: distance of the percentage text from the center
plt.subplot(2, 2, 1)
survived_counts = train['Survived'].value_counts()
survived_labels = ['Died', 'Survived']
plt.pie(x=survived_counts, labels=survived_labels, autopct="%5.2f%%", pctdistance=0.6,
        shadow=False, labeldistance=1.1, startangle=90)
plt.title('Survived')
# Draw the pie as a perfect circle
plt.axis('equal')
#plt.show()

plt.subplot(2, 2, 2)
gender_counts = train['Sex'].value_counts()
plt.pie(x=gender_counts, labels=gender_counts.keys(), autopct="%5.2f%%", pctdistance=0.6,
        shadow=False, labeldistance=1.1, startangle=90)
plt.title('Gender')
plt.axis('equal')

plt.subplot(2, 2, 3)
pclass_counts = train['Pclass'].value_counts()
plt.pie(x=pclass_counts, labels=pclass_counts.keys(), autopct="%5.2f%%", pctdistance=0.6,
        shadow=False, labeldistance=1.1, startangle=90)
plt.title('Pclass')
plt.axis('equal')

plt.subplot(2, 2, 4)
embarked_counts = train['Embarked'].value_counts()
plt.pie(x=embarked_counts, labels=embarked_counts.keys(), autopct="%5.2f%%", pctdistance=0.6,
        shadow=False, labeldistance=1.1, startangle=90)
plt.title('Embarked')
plt.axis('equal')
plt.show()

# Bar charts for SibSp and Parch, histograms for Age and Fare
plt.subplot(2, 2, 1)
sibsp_counts = train['SibSp'].value_counts().to_dict()
plt.bar(list(sibsp_counts.keys()), list(sibsp_counts.values()))
plt.title('SibSp')

plt.subplot(2, 2, 2)
parch_counts = train['Parch'].value_counts().to_dict()
plt.bar(list(parch_counts.keys()), list(parch_counts.values()))
plt.title('Parch')

plt.style.use('ggplot')
plt.subplot(2, 2, 3)
# Note: train.Age contains NaN; drop them first if your matplotlib/numpy version complains
plt.hist(train.Age, bins=np.arange(0, 100, 5), range=(0, 100), color='steelblue', edgecolor='k')
plt.title('Age')

plt.subplot(2, 2, 4)
plt.hist(train.Fare, bins=20, color='steelblue', edgecolor='k')
plt.title('Fare')
plt.show()

2. How individual factors relate to survival

(1) Sex:

Compute the survival rate for each sex:

print(train.groupby('Sex')['Survived'].value_counts())
print(train.groupby('Sex')['Survived'].mean())

The output is:

Sex     Survived
female  1           233
        0            81
male    0           468
        1           109
Sex
female    0.742038
male      0.188908

Women survived at a rate of 74.20% versus only 18.89% for men, a very large gap, so sex is an important factor.

(2) Age:

Compute the survival rate at each age:

fig, axis1 = plt.subplots(1, 1, figsize=(18, 4))
train_age = train.dropna(subset=['Age']).copy()  # .copy() avoids pandas' SettingWithCopyWarning below
train_age["Age_int"] = train_age["Age"].astype(int)
train_age.groupby('Age_int')['Survived'].mean().plot(kind='bar')
plt.show()

The output is a bar chart of the survival rate at each integer age (figure not reproduced here).

Young children have a relatively high survival rate, while several of the older age groups have a survival rate of 0. Next, look at the actual numbers of survivors and non-survivors at each age.

print(train_age.groupby('Age_int')['Survived'].value_counts())

The output is:

Age_int  Survived=0  Survived=1
0             0           7
1             2           5
2             7           3
3             1           5
4             3           7
5             0           4
6             1           2
7             2           1
8             2           2
9             6           2
10            2           0
11            3           1
12            0           1
13            0           2
14            4           3
15            1           4
16           11           6
17            7           6
18           17           9
19           16           9
20           13           3
21           19           5
22           16          11
23           11           5
24           16          15
25           17           6
26           12           6
27            7          11
28           20           7
29           12           8
30           17          10
31            9           8
32           10          10
33            9           6
34           10           6
35            7          11
36           12          11
37            5           1
38            6           5
39            9           5
40            9           6
41            4           2
42            7           6
43            4           1
44            6           3
45            9           5
46            3           0
47            8           1
48            3           6
49            2           4
50            5           5
51            5           2
52            3           3
53            0           1
54            5           3
55            2           1
56            2           2
57            2           0
58            2           3
59            2           0
60            2           2
61            3           0
62            2           2
63            0           2
64            2           0
65            3           0
66            1           0
70            3           0
71            2           0
74            1           0
80            0           1

Now group the ages into bands and compute each band's survival rate. Children under 1 all survived, so they get a band of their own, and the rest are split into 1-15, 15-55 and >55.

train_age['Age_derived'] = pd.cut(train_age['Age'], bins=[0, 0.99, 14.99, 54.99, 100])
print(train_age.groupby('Age_derived')['Survived'].value_counts())
print(train_age.groupby('Age_derived')['Survived'].mean())

The output is:

Age_derived     Survived
(0.0, 0.99]     1             7
(0.99, 14.99]   1            38
                0            33
(14.99, 54.99]  0           362
                1           232
(54.99, 100.0]  0            29
                1            13
Age_derived
(0.0, 0.99]       1.000000
(0.99, 14.99]     0.535211
(14.99, 54.99]    0.390572
(54.99, 100.0]    0.309524

Children survive at a clearly higher rate than adults and the elderly.

(3) Passenger class (Pclass):

Compute the survival rate for each passenger class:

print(train.groupby('Pclass')['Survived'].value_counts())
print(train.groupby('Pclass')['Survived'].mean())

The output is:

Pclass  Survived
1       1           136
        0            80
2       0            97
        1            87
3       0           372
        1           119
Pclass
1    0.629630
2    0.472826
3    0.242363

First class has a survival rate of 62.96%, second class 47.28% and third class 24.24%, so passenger class is another important factor.

(4) Port of embarkation:

Compute the survival rate for passengers from each port:

print(train.groupby('Embarked')['Survived'].value_counts())
print(train.groupby('Embarked')['Survived'].mean())

The output is:

Embarked  Survived
C         1            93
          0            75
Q         0            47
          1            30
S         0           427
          1           217
Embarked
C    0.553571
Q    0.389610
S    0.336957

Port C has a survival rate of 55.36%, port Q 38.96% and port S 33.70%. Port C is noticeably higher, so the port of embarkation may also influence survival.

(5) Siblings/spouses aboard (SibSp) and parents/children aboard (Parch)

Compute the survival rate for each value of SibSp and Parch:

print(train.groupby('SibSp')['Survived'].value_counts())
print(train.groupby('SibSp')['Survived'].mean())
print(train.groupby('Parch')['Survived'].value_counts())
print(train.groupby('Parch')['Survived'].mean())

The output is:

SibSp  Survived
0      0           398
       1           210
1      1           112
       0            97
2      0            15
       1            13
3      0            12
       1             4
4      0            15
       1             3
5      0             5
8      0             7
SibSp
0    0.345395
1    0.535885
2    0.464286
3    0.250000
4    0.166667
5    0.000000
8    0.000000
Parch  Survived
0      0           445
       1           233
1      1            65
       0            53
2      0            40
       1            40
3      1             3
       0             2
4      0             4
5      0             4
       1             1
6      0             1
Parch
0    0.343658
1    0.550847
2    0.500000
3    0.600000
4    0.000000
5    0.200000
6    0.000000

Passengers travelling alone have a relatively low survival rate, but so do those with many relatives aboard.

(6) Cabin:

Cabin has far too many missing values to impute. For now, split it into missing and not missing and compute the survival rate of each group.

train.loc[train['Cabin'].isnull(), 'Cabin_derived'] = 'Missing'
train.loc[train['Cabin'].notnull(), 'Cabin_derived'] = 'Not Missing'
print(train.groupby('Cabin_derived')['Survived'].value_counts())
print(train.groupby('Cabin_derived')['Survived'].mean())

The output is:

Cabin_derived  Survived
Missing        0           481
               1           206
Not Missing    1           136
               0            68
Cabin_derived
Missing        0.299854
Not Missing    0.666667

Passengers with a missing Cabin have a survival rate of 29.99% versus 66.67% when Cabin is present, so whether Cabin is missing may be related to survival.

(7) Fare

First check whether fares differ between survivors and non-survivors:

print(train['Fare'][train['Survived'] == 0].describe())
print(train['Fare'][train['Survived'] == 1].describe())

The output is:

count    549.000000
mean      22.117887
std       31.388207
min        0.000000
25%        7.854200
50%       10.500000
75%       26.000000
max      263.000000
count    342.000000
mean      48.395408
std       66.596998
min        0.000000
25%       12.475000
50%       26.000000
75%       57.000000
max      512.329200

The survivors' median fare is 26 versus 10.5 for non-survivors, a fairly clear difference.

(8) Name:

My first instinct was that since every name is different, this feature has little value. That turns out to be quite wrong: in the Titanic data the Name field matters and carries very useful information. First, look at what Name actually contains:

print (train.Name)

The output is:

0                               Braund, Mr. Owen Harris
1     Cumings, Mrs. John Bradley (Florence Briggs Th...
2                                Heikkinen, Miss. Laina
3          Futrelle, Mrs. Jacques Heath (Lily May Peel)
4                              Allen, Mr. William Henry
5                                      Moran, Mr. James
6                               McCarthy, Mr. Timothy J
7                        Palsson, Master. Gosta Leonard
8     Johnson, Mrs. Oscar W (Elisabeth Vilhelmina Berg)
9                   Nasser, Mrs. Nicholas (Adele Achem)
10                      Sandstrom, Miss. Marguerite Rut
11                             Bonnell, Miss. Elizabeth
12                       Saundercock, Mr. William Henry
13                          Andersson, Mr. Anders Johan
14                 Vestrom, Miss. Hulda Amanda Adolfina
15                     Hewlett, Mrs. (Mary D Kingcome)
16                                 Rice, Master. Eugene
17                         Williams, Mr. Charles Eugene
18    Vander Planke, Mrs. Julius (Emelia Maria Vande...
19                              Masselmani, Mrs. Fatima
...

Name contains a title: Mr., Mrs., Miss., Master. and so on. So let's first extract it into a separate feature, Title:

train['Title'] = train['Name'].map(lambda x: re.compile(", (.*?)\.").findall(x)[0])
print(train['Title'].value_counts())

The output is:

Mr              517
Miss            182
Mrs             125
Master           40
Dr                7
Rev               6
Col               2
Major             2
Mlle              2
Mme               1
Ms                1
Don               1
Sir               1
Jonkheer          1
Capt              1
Lady              1
the Countess      1
Name: Title, dtype: int64

Quite a few passengers carry the title Master; let's see who they are:

print (train[train['Title'] == 'Master'][['Survived','Title','Sex','Parch','SibSp','Fare','Age','Embarked']])

The output is:

     Survived   Title   Sex  Parch  SibSp     Fare    Age Embarked
7           0  Master  male      1      3  21.0750   2.00        S
16          0  Master  male      1      4  29.1250   2.00        Q
50          0  Master  male      1      4  39.6875   7.00        S
59          0  Master  male      2      5  46.9000  11.00        S
63          0  Master  male      2      3  27.9000   4.00        S
65          1  Master  male      1      1  15.2458    NaN        C
78          1  Master  male      2      0  29.0000   0.83        S
125         1  Master  male      0      1  11.2417  12.00        C
159         0  Master  male      2      8  69.5500    NaN        S
164         0  Master  male      1      4  39.6875   1.00        S
165         1  Master  male      2      0  20.5250   9.00        S
171         0  Master  male      1      4  29.1250   4.00        Q
176         0  Master  male      1      3  25.4667    NaN        S
...

Clearly, Master refers to young boys.

There are many different titles, so merge the rare ones and then check whether the survival rate differs across titles:

train['Title'] = train['Title'].replace(['Lady', 'the Countess', 'Capt', 'Col', 'Don', 'Dr',
                                         'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')
train['Title'] = train['Title'].replace(['Mlle', 'Ms'], 'Miss')
train['Title'] = train['Title'].replace('Mme', 'Mrs')
print(train.groupby('Title')['Survived'].value_counts())
print(train.groupby('Title')['Survived'].mean())

The output is:

Title   Survived
Master  1            23
        0            17
Miss    1           130
        0            55
Mr      0           436
        1            81
Mrs     1           100
        0            26
Rare    0            15
        1             8
Title
Master    0.575000
Miss      0.702703
Mr        0.156673
Mrs       0.793651
Rare      0.347826

So Title is also a factor that affects survival.

III. Data Preprocessing

This step covers missing-value imputation, discretizing the continuous numeric variables, and dummy-encoding the categorical variables. Preprocessing is carried out on the combined training and test sets.

1. Missing-value imputation

From the analysis above, Age, Cabin and Embarked are missing in the training set and Age, Cabin and Fare in the test set. Cabin's missing rate (over 70%) is too high, so we do not impute it.

(1) Imputing Embarked:

Embarked is a categorical variable, the port of embarkation, taking the values C, Q and S. Fill the missing values with the most frequent value.

train_test_combined['Embarked'].fillna(train_test_combined['Embarked'].mode().iloc[0], inplace=True)

(2) Imputing Fare:

Fare is numeric; fill the missing value with the mean fare of the corresponding Pclass.

train_test_combined['Fare'] = train_test_combined[['Fare']].fillna(train_test_combined.groupby('Pclass').transform('mean'))
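The one-liner above relies on fillna aligning its columns with the grouped transform. A perhaps clearer, roughly equivalent version (a sketch; it transforms only the Fare column, and the same pattern also works for the Age imputation below):

train_test_combined['Fare'] = train_test_combined['Fare'].fillna(
    train_test_combined.groupby('Pclass')['Fare'].transform('mean'))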

(3) Imputing Age:

Age is numeric; fill the missing values with the mean age of the corresponding Title (Mr, Mrs, Miss, Master, and so on).

train_test_combined['Title'] = train_test_combined['Name'].map(lambda x: re.compile(", (.*?)\.").findall(x)[0])
train_test_combined['Age'] = train_test_combined[['Age']].fillna(train_test_combined.groupby('Title').transform('mean'))

After imputing the missing values, check the data again:

print (train_test_combined.info())

The output is:

Data columns (total 13 columns):
Age            1309 non-null float64
Cabin          295 non-null object
Embarked       1309 non-null object
Fare           1309 non-null float64
Name           1309 non-null object
Parch          1309 non-null int64
PassengerId    1309 non-null int64
Pclass         1309 non-null int64
Sex            1309 non-null object
SibSp          1309 non-null int64
Survived       891 non-null float64
Ticket         1309 non-null object
Title          1309 non-null object
dtypes: float64(3), int64(4), object(6)

2. Discretizing the continuous numeric variables

(1) Age:

Based on the earlier analysis of age and survival, split Age into four bands: <1, 1 to <15, 15 to <55, and >=55.

train_test_combined['Age_derived'] = pd.cut(train_test_combined['Age'], bins=[0, 0.99, 14.99, 54.99, 100],
                                            labels=['baby', 'child', 'adult', 'older'])
age_dummy = pd.get_dummies(train_test_combined['Age_derived']).rename(columns=lambda x: 'Age_' + str(x))
train_test_combined = pd.concat([train_test_combined, age_dummy], axis=1)

(2) Fare:

Looking at Ticket, some passengers share the same ticket number, i.e. group tickets, so the group fare has to be split evenly among the people on the ticket.

print (train_test_combined.Ticket.value_counts())

The output is:

CA. 2343        11
CA 2144          8
1601             8
347082           7
3101295          7
PC 17608         7
S.O.C. 14879     7
347077           7
19950            6
347088           6
113781           6
382652           6
...

Split the group fares evenly:

train_test_combined['Group_ticket'] = train_test_combined['Fare'].groupby(by=train_test_combined['Ticket']).transform('count')
train_test_combined['Fare'] = train_test_combined['Fare'] / train_test_combined['Group_ticket']

Check the mean, median and other statistics of the per-person Fare:

print (train_test_combined['Fare'].describe())

The output is:

count    1309.000000
mean       14.756516
std        13.550515
min         0.000000
25%         7.550000
50%         8.050000
75%        15.000000
max       128.082300
Name: Fare, dtype: float64

Using P25 and P75, split Fare into three bands: Low_fare (<=7.55), Median_fare (7.55-15.00) and High_fare (>15.00).

train_test_combined['Fare_derived'] = pd.cut(train_test_combined['Fare'], bins=[-1, 7.55, 15.00, 130],
                                             labels=['Low_fare', 'Median_fare', 'High_fare'])
fare_dummy = pd.get_dummies(train_test_combined['Fare_derived']).rename(columns=lambda x: str(x))
train_test_combined = pd.concat([train_test_combined, fare_dummy], axis=1)
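As a side note, pd.qcut can compute the quartile edges automatically instead of hard-coding them; this sketch should give roughly the same three bands:

train_test_combined['Fare_derived'] = pd.qcut(train_test_combined['Fare'], q=[0, 0.25, 0.75, 1.0],
                                              labels=['Low_fare', 'Median_fare', 'High_fare'])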

3. Family Size

SibSp and Parch both count relatives aboard, so add them together into a new variable, Family_size.

train_test_combined['Family_size'] = train_test_combined['Parch'] + train_test_combined['SibSp']
print(train_test_combined.groupby('Family_size')['Survived'].value_counts())
print(train_test_combined.groupby('Family_size')['Survived'].mean())

The output is:

Family_size  Survived
0            0           374
             1           163
1            1            89
             0            72
2            1            59
             0            43
3            1            21
             0             8
4            0            12
             1             3
5            0            19
             1             3
6            0             8
             1             4
7            0             6
10           0             7
Family_size
0     0.303538
1     0.552795
2     0.578431
3     0.724138
4     0.200000
5     0.136364
6     0.333333
7     0.000000
10    0.000000

Travelling alone or with a very large family both lower the survival rate, so group Family_size into three categories: Single, Small family and Large family.

def family_size_category(Family_size):
    if Family_size == 0:
        return 'Single'
    elif Family_size <= 3:
        return 'Small family'
    else:
        return 'Large family'

train_test_combined['Family_size_category'] = train_test_combined['Family_size'].map(family_size_category)
family_dummy = pd.get_dummies(train_test_combined['Family_size_category']).rename(columns=lambda x: str(x))
train_test_combined = pd.concat([train_test_combined, family_dummy], axis=1)

4. Title

Extract the Title feature from Name:

train_test_combined['Title'] = train_test_combined['Name'].map(lambda x: re.compile(", (.*?)\.").findall(x)[0])
train_test_combined['Title'] = train_test_combined['Title'].replace(['Lady', 'the Countess', 'Capt', 'Col', 'Don',
                                                                     'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')
train_test_combined['Title'] = train_test_combined['Title'].replace(['Mlle', 'Ms'], 'Miss')
train_test_combined['Title'] = train_test_combined['Title'].replace('Mme', 'Mrs')
title_dummy = pd.get_dummies(train_test_combined['Title']).rename(columns=lambda x: 'Title_' + str(x))
train_test_combined = pd.concat([train_test_combined, title_dummy], axis=1)

5. Cabin

Create a new variable based on whether Cabin is missing:

train_test_combined.loc[train_test_combined['Cabin'].isnull(), 'Cabin_derived'] = 'Missing'
train_test_combined.loc[train_test_combined['Cabin'].notnull(), 'Cabin_derived'] = 'Not Missing'
cabin_dummy = pd.get_dummies(train_test_combined['Cabin_derived']).rename(columns=lambda x: 'Cabin_' + str(x))
train_test_combined = pd.concat([train_test_combined, cabin_dummy], axis=1)

6. Pclass, Sex, Embarked

These three variables only need dummy encoding, nothing else.

# Dummy-encode Pclass
pclass_dummy = pd.get_dummies(train_test_combined['Pclass']).rename(columns=lambda x: 'Pclass_' + str(x))
train_test_combined = pd.concat([train_test_combined, pclass_dummy], axis=1)
# Dummy-encode Sex
sex_dummy = pd.get_dummies(train_test_combined['Sex']).rename(columns=lambda x: str(x))
train_test_combined = pd.concat([train_test_combined, sex_dummy], axis=1)
# Dummy-encode Embarked
embarked_dummy = pd.get_dummies(train_test_combined['Embarked']).rename(columns=lambda x: 'Embarked_' + str(x))
train_test_combined = pd.concat([train_test_combined, embarked_dummy], axis=1)

Finally, split the combined data back into the training and test sets and keep only the useful features.

train = train_test_combined[:891]
test = train_test_combined[891:]
selected_features = ['Embarked_C', 'female', 'male', 'Embarked_Q', 'Embarked_S', 'Age_baby', 'Age_child',
                     'Age_adult', 'Age_older', 'Low_fare', 'Median_fare', 'High_fare',
                     'Large family', 'Single', 'Small family', 'Title_Master', 'Title_Miss',
                     'Title_Mr', 'Title_Mrs', 'Title_Rare', 'Cabin_Missing', 'Cabin_Not Missing',
                     'Pclass_1', 'Pclass_2', 'Pclass_3']
x_train = train[selected_features]
x_test = test[selected_features]
y_train = train['Survived']

With that, preprocessing is done and we can move on to building models for prediction.
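The modelling code below uses several estimators whose imports the original post does not show. The following block is what I assume was imported; the exact module path for GridSearchCV depends on the scikit-learn version (older releases kept it in sklearn.grid_search), and os/pydotplus are only needed for exporting the decision-tree plot:

from sklearn.model_selection import GridSearchCV  # assumed; very old scikit-learn used sklearn.grid_search
from sklearn import tree
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier
import os         # used to put graphviz on the PATH for the tree plot
import pydotplus  # used to render the exported decision tree to PDF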

IV. Modelling

1. Logistic regression

Use grid-search cross-validation to find the best hyperparameter C:

lr = LogisticRegression(random_state=33)
param_lr = {'C': np.logspace(-4, 4, 9)}
grid_lr = GridSearchCV(estimator=lr, param_grid=param_lr, cv=5)
grid_lr.fit(x_train, y_train)
print(grid_lr.grid_scores_, '\n', 'Best param: ', grid_lr.best_params_, '\n', 'Best score: ', grid_lr.best_score_)

The output is:

[mean: 0.64646, std: 0.00833, params: {'C': 0.0001},
 mean: 0.70595, std: 0.01292, params: {'C': 0.001},
 mean: 0.80471, std: 0.02215, params: {'C': 0.01},
 mean: 0.82043, std: 0.00361, params: {'C': 0.10000000000000001},
 mean: 0.82492, std: 0.02629, params: {'C': 1.0},
 mean: 0.82379, std: 0.02747, params: {'C': 10.0},
 mean: 0.82492, std: 0.02813, params: {'C': 100.0},
 mean: 0.82492, std: 0.02813, params: {'C': 1000.0},
 mean: 0.82492, std: 0.02813, params: {'C': 10000.0}]
 Best param:  {'C': 1.0}
 Best score:  0.8249158249158249
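Note that grid_scores_ only exists in older scikit-learn releases. In current versions the same summary can be pulled from cv_results_, and the same applies to every grid search in this post (a sketch):

cv_summary = pd.DataFrame(grid_lr.cv_results_)[['params', 'mean_test_score', 'std_test_score']]
print(cv_summary)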

Print the coefficient of each feature:

print (pd.DataFrame({"columns":list(x_train.columns), "coef":list(grid_lr.best_estimator_.coef_.T)}))

The output is:

              columns                 coef
0          Embarked_C      [0.23649956536]
1              female     [0.892754957337]
2                male    [-0.817790866598]
3          Embarked_Q    [0.0560917611675]
4          Embarked_S    [-0.217627235788]
5            Age_baby     [0.903880875824]
6           Age_child     [0.307975441906]
7           Age_adult     [-0.12853864715]
8           Age_older     [-1.00835357984]
9            Low_fare    [-0.343780990932]
10        Median_fare    [-0.102505740604]
11          High_fare     [0.521250822275]
12       Large family     [-1.40958453387]
13             Single     [0.864627435362]
14       Small family     [0.619921189252]
15       Title_Master      [1.76928521042]
16         Title_Miss   [0.00766966811902]
17           Title_Mr     [-1.21722551405]
18          Title_Mrs     [0.469708936608]
19         Title_Rare    [-0.954474210357]
20      Cabin_Missing     [-0.35111453535]
21  Cabin_Not Missing     [0.426078626089]
22           Pclass_1     [0.279724883526]
23           Pclass_2     [0.295636224026]
24           Pclass_3    [-0.500397016812]

Now use the trained model to predict the test set and save the result locally.

# Predict with the fitted grid search; the bare lr estimator was never fitted
lr_y_predict = grid_lr.predict(x_test).astype('int')
lr_submission = pd.DataFrame({'PassengerId': test['PassengerId'], 'Survived': lr_y_predict})
lr_submission.to_csv('../lr_submission.csv', index=False)

Finally, make a submission on Kaggle. The score is 0.7799.

2. Decision tree

Use grid-search CV to find the best max_depth and min_samples_split.

clf = tree.DecisionTreeClassifier(random_state=33)
param_clf = {'max_depth': [3, 5, 10, 15, 20, 25], 'min_samples_split': [2, 4, 6, 8, 10, 15, 20]}
grid_clf = GridSearchCV(estimator=clf, param_grid=param_clf, cv=5)
grid_clf.fit(x_train, y_train)
print(grid_clf.grid_scores_, '\n', 'Best param: ', grid_clf.best_params_, '\n', 'Best score: ', grid_clf.best_score_)

# Print the feature importances
feature_imp_sorted_clf = pd.DataFrame({'feature': list(x_train.columns),
                                       'importance': grid_clf.best_estimator_.feature_importances_}).sort_values('importance', ascending=False)
print(feature_imp_sorted_clf)

# Write out the predictions
clf_y_predict = grid_clf.predict(x_test).astype('int')
clf_submission = pd.DataFrame({'PassengerId': test['PassengerId'], 'Survived': clf_y_predict})
clf_submission.to_csv('../clf_submission.csv', index=False)

The output is:

[mean: 0.82604, std: 0.02448, params: {'max_depth': 3, 'min_samples_split': 2},
 mean: 0.82604, std: 0.02448, params: {'max_depth': 3, 'min_samples_split': 4},
 mean: 0.82604, std: 0.02448, params: {'max_depth': 3, 'min_samples_split': 6},
 mean: 0.82604, std: 0.02448, params: {'max_depth': 3, 'min_samples_split': 8},
 mean: 0.82604, std: 0.02448, params: {'max_depth': 3, 'min_samples_split': 10},
 mean: 0.82604, std: 0.02448, params: {'max_depth': 3, 'min_samples_split': 15},
 mean: 0.82604, std: 0.02448, params: {'max_depth': 3, 'min_samples_split': 20},
 mean: 0.83277, std: 0.02694, params: {'max_depth': 5, 'min_samples_split': 2},
 mean: 0.83277, std: 0.02694, params: {'max_depth': 5, 'min_samples_split': 4},
 mean: 0.83277, std: 0.02694, params: {'max_depth': 5, 'min_samples_split': 6},
 mean: 0.83389, std: 0.02877, params: {'max_depth': 5, 'min_samples_split': 8},
 mean: 0.83389, std: 0.02877, params: {'max_depth': 5, 'min_samples_split': 10},
 mean: 0.83389, std: 0.02877, params: {'max_depth': 5, 'min_samples_split': 15},
 mean: 0.83277, std: 0.02694, params: {'max_depth': 5, 'min_samples_split': 20},
 mean: 0.81930, std: 0.01400, params: {'max_depth': 10, 'min_samples_split': 2},
 mean: 0.81930, std: 0.01848, params: {'max_depth': 10, 'min_samples_split': 4},
 mean: 0.82043, std: 0.01939, params: {'max_depth': 10, 'min_samples_split': 6},
 mean: 0.82267, std: 0.02194, params: {'max_depth': 10, 'min_samples_split': 8},
 mean: 0.82492, std: 0.02281, params: {'max_depth': 10, 'min_samples_split': 10},
 mean: 0.82604, std: 0.02161, params: {'max_depth': 10, 'min_samples_split': 15},
 mean: 0.82716, std: 0.01968, params: {'max_depth': 10, 'min_samples_split': 20},
 mean: 0.81818, std: 0.01438, params: {'max_depth': 15, 'min_samples_split': 2},
 mean: 0.81706, std: 0.01711, params: {'max_depth': 15, 'min_samples_split': 4},
 mean: 0.81818, std: 0.01787, params: {'max_depth': 15, 'min_samples_split': 6},
 mean: 0.82379, std: 0.02051, params: {'max_depth': 15, 'min_samples_split': 8},
 mean: 0.82828, std: 0.02255, params: {'max_depth': 15, 'min_samples_split': 10},
 mean: 0.82604, std: 0.02161, params: {'max_depth': 15, 'min_samples_split': 15},
 mean: 0.82716, std: 0.01968, params: {'max_depth': 15, 'min_samples_split': 20},
 mean: 0.81818, std: 0.01438, params: {'max_depth': 20, 'min_samples_split': 2},
 mean: 0.81706, std: 0.01711, params: {'max_depth': 20, 'min_samples_split': 4},
 mean: 0.81818, std: 0.01787, params: {'max_depth': 20, 'min_samples_split': 6},
 mean: 0.82379, std: 0.02051, params: {'max_depth': 20, 'min_samples_split': 8},
 mean: 0.82828, std: 0.02255, params: {'max_depth': 20, 'min_samples_split': 10},
 mean: 0.82604, std: 0.02161, params: {'max_depth': 20, 'min_samples_split': 15},
 mean: 0.82716, std: 0.01968, params: {'max_depth': 20, 'min_samples_split': 20},
 mean: 0.81818, std: 0.01438, params: {'max_depth': 25, 'min_samples_split': 2},
 mean: 0.81706, std: 0.01711, params: {'max_depth': 25, 'min_samples_split': 4},
 mean: 0.81818, std: 0.01787, params: {'max_depth': 25, 'min_samples_split': 6},
 mean: 0.82379, std: 0.02051, params: {'max_depth': 25, 'min_samples_split': 8},
 mean: 0.82828, std: 0.02255, params: {'max_depth': 25, 'min_samples_split': 10},
 mean: 0.82604, std: 0.02161, params: {'max_depth': 25, 'min_samples_split': 15},
 mean: 0.82716, std: 0.01968, params: {'max_depth': 25, 'min_samples_split': 20}]
 Best param:  {'max_depth': 5, 'min_samples_split': 8}
 Best score:  0.8338945005611672

              feature  importance
17           Title_Mr    0.579502
12       Large family    0.135564
19         Title_Rare    0.066667
21  Cabin_Not Missing    0.065133
24           Pclass_3    0.045870
9            Low_fare    0.041589
4          Embarked_S    0.020851
2                male    0.014137
7           Age_adult    0.008480
23           Pclass_2    0.007741
11          High_fare    0.007008
22           Pclass_1    0.002868
13             Single    0.001521
14       Small family    0.001146
0          Embarked_C    0.001003
3          Embarked_Q    0.000633
18          Title_Mrs    0.000288
20      Cabin_Missing    0.000000
5            Age_baby    0.000000
16         Title_Miss    0.000000
6           Age_child    0.000000
1              female    0.000000
10        Median_fare    0.000000
8           Age_older    0.000000
15       Title_Master    0.000000

Export a visualisation of the decision tree:

print(grid_clf.best_estimator_)
clf = tree.DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=5,
                                  max_features=None, max_leaf_nodes=None,
                                  min_impurity_decrease=0.0, min_impurity_split=None,
                                  min_samples_leaf=1, min_samples_split=8,
                                  min_weight_fraction_leaf=0.0, presort=False, random_state=33,
                                  splitter='best')
clf.fit(x_train, y_train)
os.environ["PATH"] += os.pathsep + '/usr/local/Cellar/graphviz/2.40.1/bin/'
data_feature_name = list(x_train.columns)
dot_data = tree.export_graphviz(clf, out_file=None, feature_names=data_feature_name,
                                filled=True, rounded=True, special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data)
graph.write_pdf("TitanicTree.pdf")
print('Visible tree plot saved as pdf.')

The output is the rendered decision tree, saved as TitanicTree.pdf (figure not reproduced here).

Finally, make a submission on Kaggle; the accuracy is 0.78947.

3. Random Forest

Use grid-search CV to tune the parameters in stages: first fix n_estimators, then max_features, and finally max_depth, min_samples_leaf and min_samples_split.

rf = RandomForestClassifier(random_state=33)
param_rf = {'n_estimators': [i for i in range(10, 50, 5)]}
# param_rf = {'n_estimators': [10, 50, 100, 200, 500, 1000]}
grid_rf = GridSearchCV(estimator=rf, param_grid=param_rf, cv=5)
grid_rf.fit(x_train, y_train)
print(grid_rf.grid_scores_, '\n', 'Best param: ', grid_rf.best_params_, '\n', 'Best score: ', grid_rf.best_score_)

rf = RandomForestClassifier(random_state=33, n_estimators=20)
param_rf = {'max_features': [i for i in range(2, 23, 2)]}
grid_rf = GridSearchCV(estimator=rf, param_grid=param_rf, cv=5)
grid_rf.fit(x_train, y_train)
print(grid_rf.grid_scores_, '\n', 'Best param: ', grid_rf.best_params_, '\n', 'Best score: ', grid_rf.best_score_)

rf = RandomForestClassifier(random_state=33, n_estimators=20, max_features=18)
param_rf = {'max_depth': [i for i in range(10, 25, 5)], 'min_samples_split': [i for i in range(12, 21, 2)]}
grid_rf = GridSearchCV(estimator=rf, param_grid=param_rf, cv=5)
grid_rf.fit(x_train, y_train)
print(grid_rf.grid_scores_, '\n', 'Best param: ', grid_rf.best_params_, '\n', 'Best score: ', grid_rf.best_score_)

rf = RandomForestClassifier(random_state=33, n_estimators=20, max_features=18, max_depth=10)
param_rf = {'min_samples_split': [i for i in range(12, 25, 2)], 'min_samples_leaf': [i for i in range(2, 21, 2)]}
grid_rf = GridSearchCV(estimator=rf, param_grid=param_rf, cv=5)
grid_rf.fit(x_train, y_train)
print(grid_rf.grid_scores_, '\n', 'Best param: ', grid_rf.best_params_, '\n', 'Best score: ', grid_rf.best_score_)

rf = RandomForestClassifier(random_state=33, n_estimators=20, max_features=18, max_depth=10,
                            min_samples_leaf=2, min_samples_split=22, oob_score=True)
rf.fit(x_train, y_train)
# print(rf.oob_score_)
rf_y_predict = rf.predict(x_test).astype('int')
rf_submission = pd.DataFrame({'PassengerId': test['PassengerId'], 'Survived': rf_y_predict})
rf_submission.to_csv('../rf_submission.csv', index=False)

The final parameter combination is n_estimators=20, max_features=18, max_depth=10, min_samples_leaf=2, min_samples_split=22, with a best CV score of 0.8439955106621774.

Submitting on Kaggle gives 0.79425.

4. AdaBoost

AdaBoost has few parameters to tune. Use grid-search CV to find the best n_estimators and learning_rate; these two parameters need to be tuned together.

ada = AdaBoostClassifier(random_state=33)
param_ada = {'n_estimators': [500, 1000, 2000, 5000], 'learning_rate': [0.001, 0.01, 0.1]}
grid_ada = GridSearchCV(estimator=ada, param_grid=param_ada, cv=5)
grid_ada.fit(x_train, y_train)
print(grid_ada.grid_scores_, '\n', 'Best param: ', grid_ada.best_params_, '\n', 'Best score: ', grid_ada.best_score_)
ada_y_predict = grid_ada.predict(x_test).astype('int')
ada_submission = pd.DataFrame({'PassengerId': test['PassengerId'], 'Survived': ada_y_predict})
ada_submission.to_csv('../ada_submission.csv', index=False)

The output is:

[mean: 0.77890, std: 0.01317, params: {'learning_rate': 0.001, 'n_estimators': 500},
 mean: 0.78676, std: 0.01813, params: {'learning_rate': 0.001, 'n_estimators': 1000},
 mean: 0.79125, std: 0.01352, params: {'learning_rate': 0.001, 'n_estimators': 2000},
 mean: 0.81818, std: 0.01382, params: {'learning_rate': 0.001, 'n_estimators': 5000},
 mean: 0.81818, std: 0.01382, params: {'learning_rate': 0.01, 'n_estimators': 500},
 mean: 0.82941, std: 0.01887, params: {'learning_rate': 0.01, 'n_estimators': 1000},
 mean: 0.82828, std: 0.0, params: {'learning_rate': 0.01, 'n_estimators': 2000},
 mean: 0.82492, std: 0.02700, params: {'learning_rate': 0.01, 'n_estimators': 5000},
 mean: 0.82492, std: 0.02700, params: {'learning_rate': 0.1, 'n_estimators': 500},
 mean: 0.82155, std: 0.02737, params: {'learning_rate': 0.1, 'n_estimators': 1000},
 mean: 0.82267, std: 0.02647, params: {'learning_rate': 0.1, 'n_estimators': 2000},
 mean: 0.82379, std: 0.02674, params: {'learning_rate': 0.1, 'n_estimators': 5000}]
 Best param:  {'learning_rate': 0.01, 'n_estimators': 1000}
 Best score:  0.8294051627384961

Submitting on Kaggle gives 0.78947.

5. Gradient tree boosting

Use grid-search CV in stages: first choose n_estimators and learning_rate, then max_depth, min_samples_leaf and min_samples_split.

gtb = GradientBoostingClassifier(random_state=33, subsample=0.8)
param_gtb = {'n_estimators': [500, 1000, 2000, 5000], 'learning_rate': [0.001, 0.005, 0.01, 0.02]}
grid_gtb = GridSearchCV(estimator=gtb, param_grid=param_gtb, cv=5)
grid_gtb.fit(x_train, y_train)
print(grid_gtb.grid_scores_, '\n', 'Best param: ', grid_gtb.best_params_, '\n', 'Best score: ', grid_gtb.best_score_)

gtb = GradientBoostingClassifier(random_state=33, subsample=0.8, n_estimators=1000, learning_rate=0.001)
param_gtb = {'max_depth': [i for i in range(10, 25, 5)], 'min_samples_split': [i for i in range(12, 21, 2)]}
grid_gtb = GridSearchCV(estimator=gtb, param_grid=param_gtb, cv=5)
grid_gtb.fit(x_train, y_train)
print(grid_gtb.grid_scores_, '\n', 'Best param: ', grid_gtb.best_params_, '\n', 'Best score: ', grid_gtb.best_score_)

gtb = GradientBoostingClassifier(random_state=33, subsample=0.8, n_estimators=1000, learning_rate=0.001, max_depth=10)
param_gtb = {'min_samples_split': [i for i in range(10, 18, 2)], 'min_samples_leaf': [i for i in range(14, 19, 2)]}
grid_gtb = GridSearchCV(estimator=gtb, param_grid=param_gtb, cv=5)
grid_gtb.fit(x_train, y_train)
print(grid_gtb.grid_scores_, '\n', 'Best param: ', grid_gtb.best_params_, '\n', 'Best score: ', grid_gtb.best_score_)

gtb = GradientBoostingClassifier(random_state=33, subsample=0.8, n_estimators=1000, learning_rate=0.001,
                                 max_depth=10, min_samples_split=10, min_samples_leaf=16)
gtb.fit(x_train, y_train)
gtb_y_predict = gtb.predict(x_test).astype('int')
print(cross_val_score(gtb, x_train, y_train, cv=5).mean())
gtb_submission = pd.DataFrame({'PassengerId': test['PassengerId'], 'Survived': gtb_y_predict})
gtb_submission.to_csv('../gtb_submission.csv', index=False)

The final parameter combination is n_estimators=1000, learning_rate=0.001, max_depth=10, min_samples_split=10, min_samples_leaf=16, with a best CV score of 0.8417508417508418.

Submitting on Kaggle gives 0.80382.

V. An Alternative Prediction Method

We know that most women survived and most men did not. How do we decide which women did not survive and which men did? A reasonable assumption is that if a mother survived, her children survived too, and if a child died, the mother died as well. Some families appear in both the training and test sets, so the survival of the women and boys of a family in the training set can be used to predict the survival of women and boys from the same family in the test set. The rules: for a boy in the test set, if all the women and boys of his family in the training set survived, predict that he survived; for a woman in the test set, if all the women and boys of her family died, predict that she died. For everyone else, fall back on sex: women survive, men do not.

# Read the data
train = pd.read_csv('/Users/jingxuan.ljx/Documents/machine learning/kaggle/Titanic/train.csv')
test = pd.read_csv('/Users/jingxuan.ljx/Documents/machine learning/kaggle/Titanic/test.csv')

# Surname + Pclass identifies a family (members of one family should share the same class)
train['Surname'] = [train.iloc[i]['Name'].split(',')[0] + str(train.iloc[i]['Pclass']) for i in range(len(train))]
test['Surname'] = [test.iloc[i]['Name'].split(',')[0] + str(test.iloc[i]['Pclass']) for i in range(len(test))]
train['Family_size'] = train['Parch'] + train['SibSp']
test['Family_size'] = test['Parch'] + test['SibSp']
train['Title'] = train['Name'].map(lambda x: re.compile(", (.*?)\.").findall(x)[0])
test['Title'] = test['Name'].map(lambda x: re.compile(", (.*?)\.").findall(x)[0])

# Boys: title "Master", or male and under 13
boy = (train.Name.str.contains('Master')) | ((train.Sex == 'male') & (train.Age < 13))
female = train.Sex == 'female'
boy_or_female = boy | female

# Mean survival of the women and boys of each family in the training set
boy_femSurvival = train[boy_or_female].groupby('Surname')['Survived'].mean().to_frame()
boy_femSurvived = list(boy_femSurvival[boy_femSurvival['Survived'] == 1].index)
boy_femDied = list(boy_femSurvival[boy_femSurvival['Survived'] == 0].index)

def boy_female_survival(input_dataset):
    for i in range(len(input_dataset)):
        if (input_dataset.iloc[i]['Surname'] in boy_femSurvived
                and input_dataset.iloc[i]['Family_size'] > 0
                and (input_dataset.iloc[i]['Sex'] == 'female'
                     or (input_dataset.iloc[i]['Title'] == 'Master'
                         or (input_dataset.iloc[i]['Sex'] == 'male' and input_dataset.iloc[i]['Age'] < 13)))):
            input_dataset.loc[i, 'Survived'] = 1
        elif input_dataset.iloc[i]['Surname'] in boy_femDied and input_dataset.iloc[i]['Family_size'] > 0:
            input_dataset.loc[i, 'Survived'] = 0

boy_female_survival(test)
# print(test[test['Survived'] == 1][['Name', 'Age', 'Sex', 'Pclass', 'Family_size']])

test_out1 = test[test['Survived'].notnull()]
test1 = test[test['Survived'].isnull()]
test1.index = range(0, len(test1))

# Predict the remaining passengers from sex alone
def gender_survival(sex):
    if sex == 'female':
        return 1
    else:
        return 0

test1['Survived'] = test1['Sex'].map(gender_survival)

# Combine the two sets of predictions
test_out = pd.concat([test_out1, test1], axis=0).sort_values(by='PassengerId')
test_submission = test_out[['PassengerId', 'Survived']]
test_submission['Survived'] = test_submission['Survived'].astype('int')
test_submission.to_csv('../test_submission.csv', index=False)

Submitting on Kaggle gives 0.81339, better than any of the models above.

References:

1. How to score over 82% Titanic

2. Kaggle_Titanic生存预测 -- 详细流程吐血梳理
