Many diseases, including cancer, are believed to share a common contributing factor. Ion channels are pore-forming proteins present in animals and plants. They encode learning and memory, help fight infection, signal pain, and stimulate muscle contraction. If scientists could better study ion channels, which may become possible with the help of machine learning, it could have far-reaching impact.

Below we build a single-feature model that exploits the correlation between the signal and the number of open ion channels.
Loading Libraries and Data
```python
import numpy as np   # linear algebra
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import os
from sklearn.metrics import f1_score
import graphviz
from sklearn import tree
```

```python
test = pd.read_csv('../input/liverpool-ion-switching/test.csv')
train = pd.read_csv('../input/liverpool-ion-switching/train.csv')
train.head()
```
Data Description

The training data is recorded over time: every ten-thousandth of a second, the signal strength and the number of open ion channels are recorded. Our task is to build a model that predicts, from the signal, the number of open channels at each time step. We are also told that the data was recorded in batches of 50 seconds, so every 500,000 rows form one batch. The training data contains 10 batches and the test data contains 4 batches. Let's display the number of open channels and the signal strength of each training batch together.
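As a quick sanity check on this layout, the batch number can be derived from the row index alone. The frame below is a tiny hypothetical stand-in for train.csv, not the real data:

```python
import pandas as pd

# At 10 kHz, time advances 0.0001 s per row, and each 50 s batch is
# 500,000 rows, so the batch number follows from the row index alone.
rows_per_batch = 500_000
df = pd.DataFrame({'row': [0, 499_999, 500_000, 4_999_999]})
df['batch'] = df['row'] // rows_per_batch + 1  # batches numbered 1..10
print(df['batch'].tolist())  # [1, 1, 2, 10]
```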
```python
plt.figure(figsize=(20,5)); res = 1000
plt.plot(range(0,train.shape[0],res),train.signal[0::res])
for i in range(11): plt.plot([i*500000,i*500000],[-5,12.5],'r')
for j in range(10): plt.text(j*500000+200000,10,str(j+1),size=20)
plt.xlabel('Row',size=16); plt.ylabel('Signal',size=16);
plt.title('Training Data Signal - 10 batches',size=20)
plt.show()
```

```python
plt.figure(figsize=(20,5)); res = 1000
plt.plot(range(0,train.shape[0],res),train.open_channels[0::res])
for i in range(11): plt.plot([i*500000,i*500000],[-5,12.5],'r')
for j in range(10): plt.text(j*500000+200000,10,str(j+1),size=20)
plt.xlabel('Row',size=16); plt.ylabel('Channels Open',size=16);
plt.title('Training Data Open Channels - 10 batches',size=20)
plt.show()
```
Reflections

From the plots above, it looks like five different synthetic models were used: a model with at most 1 open channel and low open probability (batches 1, 2), a model with at most 1 open channel and high open probability (batches 3, 7), a model with at most 3 open channels (batches 4, 8), a model with at most 5 open channels (batches 6, 9), and a model with at most 10 open channels (batches 5, 10). In addition, drift was added to batches 7, 8, 9, 10 and to the beginning of batch 2.

According to the paper, the data were synthesized. Furthermore, "electrophysiological" noise and drift were added. Drift is a bias that keeps the signal from being a horizontal line, as in batches 2, 7, 8, 9, 10 above.

From the paper's data description and dataset construction: ion channel dwell times were simulated from published single-channel models using the Gillespie method [43]. Channels were assumed to follow a stochastic Markov process, and transitions from one state to the next were simulated by randomly sampling the lifetime probability distribution of each state. Authentic "electrophysiological" noise was added to these events by playing the signal through a patch-clamp amplifier and recording it back, via an electronic "model cell", into a CED Signal software file. On some datasets, additional drift was applied to the final data using Matlab. Two different stochastic gating models (termed M1 and M2) were used to generate the semi-synthetic ion channel data. M1 is the low-open-probability model of reference 41 (Fig. 3a, b), where typically no more than one ion channel opens at a time. Model M2 comes from references 42 and 44 and has a much higher open probability (Fig. 3c, d), so up to 5 channels open simultaneously and there are very few instances of zero open channels.
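To make the quoted construction concrete, here is a minimal sketch of Gillespie-style simulation for a single two-state (closed/open) Markov channel. The rate constants are illustrative only, not those of models M1 or M2:

```python
import numpy as np

def gillespie_channel(t_max, k_open=2.0, k_close=8.0, seed=0):
    """Simulate one two-state (closed <-> open) channel.
    k_open: closed->open rate; k_close: open->closed rate (per second)."""
    rng = np.random.default_rng(seed)
    t, state = 0.0, 0                     # start in the closed state
    times, states = [0.0], [0]
    while t < t_max:
        rate = k_open if state == 0 else k_close
        t += rng.exponential(1.0 / rate)  # exponential dwell time in this state
        state = 1 - state                 # flip closed <-> open
        times.append(t)
        states.append(state)
    return np.array(times), np.array(states)

times, states = gillespie_channel(50.0)
# With these rates the channel is open roughly k_open/(k_open+k_close) = 20%
# of the time; summing several independent channels gives a multi-channel trace.
```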
Correlation between Signal and Open Channels

Let's look closely at the signal and the open channels over random intervals to observe their relationship. We notice that they are highly correlated and move up and down together. Therefore we can probably predict the open channels from the single feature signal. The only trouble is the synthetic drift that was added, so we will remove it.
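The claimed correlation can be quantified directly with `np.corrcoef`; on the real data the call would be `np.corrcoef(train.signal[a:b], train.open_channels[a:b])`. The snippet below demonstrates the idea on synthetic data shaped like one drift-free batch (the noise level and scale are assumptions):

```python
import numpy as np

# Hypothetical stand-in for one drift-free batch: the signal is proportional
# to the open-channel count plus Gaussian noise, as the plots suggest.
rng = np.random.default_rng(42)
open_channels = rng.integers(0, 4, size=100_000)
signal = 1.2 * open_channels + rng.normal(0, 0.3, size=open_channels.size)

r = np.corrcoef(signal, open_channels)[0, 1]
print(round(r, 2))  # close to 1: the two series are highly correlated
```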
```python
for k in range(10):
    a = int( np.random.uniform(0,train.shape[0]-50000) )
    b = a+5000; res = 10
    print('#'*25)
    print('### Random %i to %i'%(a,b))
    print('#'*25)
    plt.figure(figsize=(20,5))
    plt.plot(range(a,b,res),train.signal[a:b][0::res])
    plt.plot(range(a,b,res),train.open_channels[a:b][0::res])
    plt.show()
```
(Output: ten figures, each plotting the signal and open channels over a random 5,000-row interval, preceded by a header such as `### Random 288832 to 293832`.)
Test Data

Let's display the test data signal.
```python
plt.figure(figsize=(20,5))
res = 1000; let = ['A','B','C','D','E','F','G','H','I','J']
plt.plot(range(0,test.shape[0],res),test.signal[0::res])
for i in range(5): plt.plot([i*500000,i*500000],[-5,12.5],'r')
for j in range(21): plt.plot([j*100000,j*100000],[-5,12.5],'r:')
for k in range(4): plt.text(k*500000+200000,10,str(k+1),size=20)
for k in range(10): plt.text(k*100000+40000,7,let[k],size=16)
plt.xlabel('Row',size=16); plt.ylabel('Channels Open',size=16);
plt.title('Test Data Signal - 4 batches - 10 subsamples',size=20)
plt.show()
```
Reflections

From this plot, we can identify the five models at work, and we can see the added drift. Batch 1 appears to consist of 5 subsamples, where A, B, C, D, E were created by models 1s, 3, 5, 1s, 1f respectively. Model 1s is the low-probability model with at most 1 open channel; model 1f is the high-probability model with at most 1 open channel. Models 3, 5, 10 are the models with at most 3, 5, 10 open channels respectively. We observe slant drift in subsamples A, B, E, G, H, I, and parabolic drift in test batch 3.
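The mapping read off the plot can be written down explicitly. The dictionary below is my own bookkeeping (not part of the competition data); it matches the per-subsample predictions made at the end of this notebook:

```python
# Subsample letter -> synthetic model inferred from the test-signal plot.
# '1s'/'1f' = slow/fast 1-channel models; '3','5','10' = max open channels.
subsample_model = {
    'A': '1s', 'B': '3', 'C': '5', 'D': '1s', 'E': '1f',
    'F': '10', 'G': '5', 'H': '10', 'I': '1s', 'J': '3',
}
# Test batches 3 and 4 (rows 1,000,000-2,000,000) appear to use model 1s throughout.
```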
Removing Training Data Drift

Here is a demonstration of slant drift removal. We can also remove the parabolic drift from batches 7, 8, 9, 10 if we wish. Below we will train our models only on batches 1, 3, 4, 5, 6, but once the training drift has been removed, we may also include batches 2, 7, 8, 9, 10 in our training if we want.
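This notebook subtracts a hand-derived ramp; as an alternative sketch, the slope of a slant drift can be estimated with a least-squares line fit and subtracted. The drift rate and noise level below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.001)             # 10 s of time stamps at 1 kHz
clean = rng.normal(0, 0.2, t.size)      # drift-free signal (pure noise here)
observed = clean + 0.3 * t              # add a slant drift of 0.3 units/s

slope, intercept = np.polyfit(t, observed, 1)      # fit the linear ramp
detrended = observed - (slope * t + intercept)     # subtract it

print(round(slope, 2))  # recovers the assumed drift rate, ~0.3
```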
```python
train2 = train.copy()
a=500000; b=600000 # CLEAN TRAIN BATCH 2
train2.loc[train.index[a:b],'signal'] = train2.signal[a:b].values - 3*(train2.time.values[a:b] - 50)/10.
```

```python
batch=2; a=500000*(batch-1); b=500000*batch; res=50
plt.figure(figsize=(20,5))
plt.plot(range(a,b,res),train.signal[a:b][0::res])
plt.title('Training Batch 2 with Slant Drift',size=16)
plt.figure(figsize=(20,5))
plt.plot(range(a,b,res),train2.signal[a:b][0::res])
plt.title('Training Batch 2 without Slant Drift',size=16)
plt.show()
```
```python
def f(x, low, high, mid):
    return -((-low + high)/625)*(x - mid)**2 + high - low

# CLEAN TRAIN BATCH 7
batch = 7; a = 500000*(batch-1); b = 500000*batch
train2.loc[train2.index[a:b],'signal'] = train.signal.values[a:b] - f(train.time[a:b].values,-1.817,3.186,325)
# CLEAN TRAIN BATCH 8
batch = 8; a = 500000*(batch-1); b = 500000*batch
train2.loc[train2.index[a:b],'signal'] = train.signal.values[a:b] - f(train.time[a:b].values,-0.094,4.936,375)
# CLEAN TRAIN BATCH 9
batch = 9; a = 500000*(batch-1); b = 500000*batch
train2.loc[train2.index[a:b],'signal'] = train.signal.values[a:b] - f(train.time[a:b].values,1.715,6.689,425)
# CLEAN TRAIN BATCH 10
batch = 10; a = 500000*(batch-1); b = 500000*batch
train2.loc[train2.index[a:b],'signal'] = train.signal.values[a:b] - f(train.time[a:b].values,3.361,8.45,475)
```
```python
plt.figure(figsize=(20,5))
plt.plot(train.time[::1000],train.signal[::1000])
plt.title('Training Batches 7-10 with Parabolic Drift',size=16)
plt.figure(figsize=(20,5))
plt.plot(train2.time[::1000],train2.signal[::1000])
plt.title('Training Batches 7-10 without Parabolic Drift',size=16)
plt.show()
```
Building Five Simple Models

One Slow Open Channel
```python
batch = 1; a = 500000*(batch-1); b = 500000*batch
batch = 2; c = 500000*(batch-1); d = 500000*batch
X_train = np.concatenate([train2.signal.values[a:b],train2.signal.values[c:d]]).reshape((-1,1))
y_train = np.concatenate([train2.open_channels.values[a:b],train2.open_channels.values[c:d]]).reshape((-1,1))
clf1s = tree.DecisionTreeClassifier(max_depth=1)
clf1s = clf1s.fit(X_train,y_train)
print('Training model 1s channel')
preds = clf1s.predict(X_train)
print('has f1 validation score =',f1_score(y_train,preds,average='macro'))
tree_graph = tree.export_graphviz(clf1s, out_file=None, max_depth=10,
    impurity=False, feature_names=['signal'], class_names=['0','1'],
    rounded=True, filled=True)
graphviz.Source(tree_graph)
```
One Fast Open Channel
```python
batch = 3; a = 500000*(batch-1); b = 500000*batch
batch = 7; c = 500000*(batch-1); d = 500000*batch
X_train = np.concatenate([train2.signal.values[a:b],train2.signal.values[c:d]]).reshape((-1,1))
y_train = np.concatenate([train2.open_channels.values[a:b],train2.open_channels.values[c:d]]).reshape((-1,1))
clf1f = tree.DecisionTreeClassifier(max_depth=1)
clf1f = clf1f.fit(X_train, y_train)
print('Training model 1f channel')
preds = clf1f.predict(X_train)
print('has f1 validation score =',f1_score(y_train,preds,average='macro'))
tree_graph = tree.export_graphviz(clf1f, out_file=None, max_depth=10,
    impurity=False, feature_names=['signal'], class_names=['0','1'],
    rounded=True, filled=True)
graphviz.Source(tree_graph)
```
Three Open Channels
```python
batch = 4; a = 500000*(batch-1); b = 500000*batch
batch = 8; c = 500000*(batch-1); d = 500000*batch
X_train = np.concatenate([train2.signal.values[a:b],train2.signal.values[c:d]]).reshape((-1,1))
y_train = np.concatenate([train2.open_channels.values[a:b],train2.open_channels.values[c:d]]).reshape((-1,1))
clf3 = tree.DecisionTreeClassifier(max_leaf_nodes=4)
clf3 = clf3.fit(X_train,y_train)
print('Training model 3 channel')
preds = clf3.predict(X_train)
print('has f1 validation score =',f1_score(y_train,preds,average='macro'))
tree_graph = tree.export_graphviz(clf3, out_file=None, max_depth=10,
    impurity=False, feature_names=['signal'], class_names=['0','1','2','3'],
    rounded=True, filled=True)
graphviz.Source(tree_graph)
```
Five Open Channels
```python
batch = 6; a = 500000*(batch-1); b = 500000*batch
batch = 9; c = 500000*(batch-1); d = 500000*batch
X_train = np.concatenate([train2.signal.values[a:b],train2.signal.values[c:d]]).reshape((-1,1))
y_train = np.concatenate([train2.open_channels.values[a:b],train2.open_channels.values[c:d]]).reshape((-1,1))
clf5 = tree.DecisionTreeClassifier(max_leaf_nodes=6)
clf5 = clf5.fit(X_train, y_train)
print('Trained model 5 channel')
preds = clf5.predict(X_train)
print('has f1 validation score =',f1_score(y_train,preds,average='macro'))
tree_graph = tree.export_graphviz(clf5, out_file=None, max_depth=10,
    impurity=False, feature_names=['signal'], class_names=['0','1','2','3','4','5'],
    rounded=True, filled=True)
graphviz.Source(tree_graph)
```
Ten Open Channels
```python
batch = 5; a = 500000*(batch-1); b = 500000*batch
batch = 10; c = 500000*(batch-1); d = 500000*batch
X_train = np.concatenate([train2.signal.values[a:b],train2.signal.values[c:d]]).reshape((-1,1))
y_train = np.concatenate([train2.open_channels.values[a:b],train2.open_channels.values[c:d]]).reshape((-1,1))
clf10 = tree.DecisionTreeClassifier(max_leaf_nodes=8)
clf10 = clf10.fit(X_train, y_train)
print('Trained model 10 channel')
preds = clf10.predict(X_train)
print('has f1 validation score =',f1_score(y_train,preds,average='macro'))
tree_graph = tree.export_graphviz(clf10, out_file=None, max_depth=10,
    impurity=False, feature_names=['signal'], class_names=[str(x) for x in range(11)],
    rounded=True, filled=True)
graphviz.Source(tree_graph)
```
Analyzing Test Data Drift

Training Data Drift

In the plots below, we see drift wherever the rolling mean is not a horizontal line. We see drift in batches 2, 7, 8, 9, 10.
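The rolling-mean diagnostic can be turned into a simple numeric check: on drift-free noise the rolling mean stays nearly flat, while an added slant shows up as a large range. A sketch on synthetic data (the window and sizes are chosen for illustration, not the notebook's 30,000-row window):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 50_000
flat = pd.Series(rng.normal(0, 0.5, n))     # drift-free noise
drifting = flat + np.linspace(0, 3, n)      # same noise plus a 3-unit slant

def mean_range(s, window=5_000):
    """Spread of the rolling mean; near zero for a horizontal trace."""
    r = s.rolling(window).mean().dropna()
    return float(r.max() - r.min())

print(round(mean_range(flat), 2), round(mean_range(drifting), 2))
```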
```python
# ORIGINAL TRAINING DATA
plt.figure(figsize=(20,5))
r = train.signal.rolling(30000).mean()
plt.plot(train.time.values,r)
for i in range(11): plt.plot([i*50,i*50],[-3,8],'r:')
for j in range(10): plt.text(j*50+20,6,str(j+1),size=20)
plt.title('Training Signal Rolling Mean. Has Drift wherever plot is not horizontal line',size=16)
plt.show()

# TRAINING DATA WITHOUT DRIFT
plt.figure(figsize=(20,5))
r = train2.signal.rolling(30000).mean()
plt.plot(train2.time.values,r)
for i in range(11): plt.plot([i*50,i*50],[-3,8],'r:')
for j in range(10): plt.text(j*50+20,6,str(j+1),size=20)
plt.title('Training Signal Rolling Mean without Drift',size=16)
plt.show()
```
Test Data Drift

We observe drift in test subsamples A, B, E, G, H, I and in test batch 3.
```python
plt.figure(figsize=(20,5))
let = ['A','B','C','D','E','F','G','H','I','J']
r = test.signal.rolling(30000).mean()
plt.plot(test.time.values,r)
for i in range(21): plt.plot([500+i*10,500+i*10],[-3,6],'r:')
for i in range(5): plt.plot([500+i*50,500+i*50],[-3,6],'r')
for k in range(4): plt.text(525+k*50,5.5,str(k+1),size=20)
for k in range(10): plt.text(505+k*10,4,let[k],size=16)
plt.title('Test Signal Rolling Mean. Has Drift wherever plot is not horizontal line',size=16)
plt.show()
```
Removing Test Data Drift

```python
test2 = test.copy()
```
```python
# REMOVE SLANT DRIFT FROM TEST BATCH 1 (SUBSAMPLES A, B, E)
start=500
a = 0; b = 100000
test2.loc[test2.index[a:b],'signal'] = test2.signal.values[a:b] - 3*(test2.time.values[a:b]-start)/10.
start=510
a = 100000; b = 200000
test2.loc[test2.index[a:b],'signal'] = test2.signal.values[a:b] - 3*(test2.time.values[a:b]-start)/10.
start=540
a = 400000; b = 500000
test2.loc[test2.index[a:b],'signal'] = test2.signal.values[a:b] - 3*(test2.time.values[a:b]-start)/10.
```
```python
# REMOVE SLANT DRIFT FROM TEST BATCH 2 (SUBSAMPLES G, H, I)
start=560
a = 600000; b = 700000
test2.loc[test2.index[a:b],'signal'] = test2.signal.values[a:b] - 3*(test2.time.values[a:b]-start)/10.
start=570
a = 700000; b = 800000
test2.loc[test2.index[a:b],'signal'] = test2.signal.values[a:b] - 3*(test2.time.values[a:b]-start)/10.
start=580
a = 800000; b = 900000
test2.loc[test2.index[a:b],'signal'] = test2.signal.values[a:b] - 3*(test2.time.values[a:b]-start)/10.
```
```python
# REMOVE PARABOLIC DRIFT FROM TEST BATCH 3
def f(x):
    return -(0.00788)*(x-625)**2+2.345 +2.58
a = 1000000; b = 1500000
test2.loc[test2.index[a:b],'signal'] = test2.signal.values[a:b] - f(test2.time[a:b].values)
```
```python
plt.figure(figsize=(20,5))
res = 1000
plt.plot(range(0,test2.shape[0],res),test2.signal[0::res])
for i in range(5): plt.plot([i*500000,i*500000],[-5,12.5],'r')
for i in range(21): plt.plot([i*100000,i*100000],[-5,12.5],'r:')
for k in range(4): plt.text(k*500000+250000,10,str(k+1),size=20)
for k in range(10): plt.text(k*100000+40000,7.5,let[k],size=16)
plt.title('Test Signal without Drift',size=16)
plt.show()
plt.figure(figsize=(20,5))
r = test2.signal.rolling(30000).mean()
plt.plot(test2.time.values,r)
for i in range(21): plt.plot([500+i*10,500+i*10],[-2,6],'r:')
for i in range(5): plt.plot([500+i*50,500+i*50],[-2,6],'r')
for k in range(4): plt.text(525+k*50,5.5,str(k+1),size=20)
for k in range(10): plt.text(505+k*10,4,let[k],size=16)
plt.title('Test Signal Rolling Mean without Drift',size=16)
plt.show()
```
Predicting Test Data
```python
sub = pd.read_csv('../input/liverpool-ion-switching/sample_submission.csv')
a = 0 # SUBSAMPLE A, Model 1s
sub.iloc[100000*a:100000*(a+1),1] = clf1s.predict(test2.signal.values[100000*a:100000*(a+1)].reshape((-1,1)))
a = 1 # SUBSAMPLE B, Model 3
sub.iloc[100000*a:100000*(a+1),1] = clf3.predict(test2.signal.values[100000*a:100000*(a+1)].reshape((-1,1)))
a = 2 # SUBSAMPLE C, Model 5
sub.iloc[100000*a:100000*(a+1),1] = clf5.predict(test2.signal.values[100000*a:100000*(a+1)].reshape((-1,1)))
a = 3 # SUBSAMPLE D, Model 1s
sub.iloc[100000*a:100000*(a+1),1] = clf1s.predict(test2.signal.values[100000*a:100000*(a+1)].reshape((-1,1)))
a = 4 # SUBSAMPLE E, Model 1f
sub.iloc[100000*a:100000*(a+1),1] = clf1f.predict(test2.signal.values[100000*a:100000*(a+1)].reshape((-1,1)))
a = 5 # SUBSAMPLE F, Model 10
sub.iloc[100000*a:100000*(a+1),1] = clf10.predict(test2.signal.values[100000*a:100000*(a+1)].reshape((-1,1)))
a = 6 # SUBSAMPLE G, Model 5
sub.iloc[100000*a:100000*(a+1),1] = clf5.predict(test2.signal.values[100000*a:100000*(a+1)].reshape((-1,1)))
a = 7 # SUBSAMPLE H, Model 10
sub.iloc[100000*a:100000*(a+1),1] = clf10.predict(test2.signal.values[100000*a:100000*(a+1)].reshape((-1,1)))
a = 8 # SUBSAMPLE I, Model 1s
sub.iloc[100000*a:100000*(a+1),1] = clf1s.predict(test2.signal.values[100000*a:100000*(a+1)].reshape((-1,1)))
a = 9 # SUBSAMPLE J, Model 3
sub.iloc[100000*a:100000*(a+1),1] = clf3.predict(test2.signal.values[100000*a:100000*(a+1)].reshape((-1,1)))
# BATCHES 3 AND 4, Model 1s
sub.iloc[1000000:2000000,1] = clf1s.predict(test2.signal.values[1000000:2000000].reshape((-1,1)))
```
Displaying Test Predictions
```python
plt.figure(figsize=(20,5))
res = 1000
plt.plot(range(0,test.shape[0],res),sub.open_channels[0::res])
for i in range(5): plt.plot([i*500000,i*500000],[-5,12.5],'r')
for i in range(21): plt.plot([i*100000,i*100000],[-5,12.5],'r:')
for k in range(4): plt.text(k*500000+250000,10,str(k+1),size=20)
for k in range(10): plt.text(k*100000+40000,7.5,let[k],size=16)
plt.title('Test Data Predictions',size=16)
plt.show()
```
```python
sub.to_csv('submission.csv',index=False,float_format='%.4f')
```