TP3-M1ProML-Correction

Posted on Tue 14 May 2019 in posts

Linear Classifier: the Perceptron

The goal of this lab is to get familiar with neural networks. We first focus on the perceptron model. The perceptron can classify a dataset provided it is linearly separable. This model is particularly important since it is one of the basic building blocks of deep neural networks.
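
As a reminder, the perceptron computes a weighted sum of the inputs plus a bias and thresholds it. A minimal sketch (for illustration only; below we use scikit-learn's implementation):

import numpy as np

def perceptron_predict(X, w, b):
    # one row of X per point: predict 1 where w.x + b > 0, else 0
    return (X @ w + b > 0).astype(int)

# example: the line x_1 + x_2 = 0 separates these two points
perceptron_predict(np.array([[2., 1.], [-2., -1.]]), np.array([1., 1.]), 0.0)  # -> [1, 0]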

Artificial dataset

In [3]:
import numpy as np
import matplotlib.pyplot as plt
In [5]:
Nd = 100
# random line w_0 + x_1*w_1 + x_2*w_2
w = np.random.random(size=(3))*2-1
xr = np.arange(-5,5)
# equation of the separating line
yr = -w[0]/w[2] - w[1]/w[2]*xr

# cloud of random points in [-4,4]^2
Cloud = np.random.random(size=(2,Nd))*8-4
# signed margin of each point w.r.t. the line; keep only points clearly on one side
Marge = np.matmul(Cloud.T,w[1:]) + w[0]
CPos = np.where(Marge > 1)
CNeg = np.where(Marge < -1)
plt.plot(xr,yr)
plt.scatter(Cloud[0,CPos],Cloud[1,CPos])
plt.scatter(Cloud[0,CNeg],Cloud[1,CNeg])
[Figure: the random separating line and the two labeled point clouds]

We create both a training set and a test set.

In [6]:
labels = np.zeros(Cloud.shape[1])
labels[np.where(Marge>1)] = 1

# training set: keep only the points outside the margin band
CAll = np.where(np.abs(Marge) > 1)
X = Cloud[:,CAll[0]]
y = labels[CAll[0]]

# test set: fresh random points, labeled by the true line (no margin this time)
Nt = 50
Xt = np.random.random(size=(2,Nt))*8-4
yt = np.zeros(Xt.shape[1])
Marge_t = np.matmul(Xt.T,w[1:]) + w[0]
yt[np.where(Marge_t>0)] = 1

CPos_t = np.where(Marge_t > 0)
CNeg_t = np.where(Marge_t < 0)

plt.plot(xr,yr)
plt.scatter(X[0,:],X[1,:])
plt.scatter(Xt[0,CPos_t],Xt[1,CPos_t])
plt.scatter(Xt[0,CNeg_t],Xt[1,CNeg_t])
[Figure: training points with the true separating line, plus the labeled test points]

The Perceptron

Using the library below, fit the model on the dataset. In particular, look at:

  • the evolution of the separating line after each iteration
  • the score (accuracy) curve
  • the loss curve

For each of the last two curves, also add the values obtained on the test set. Note the trick used below: with warm_start=True and max_iter=1, each call to fit runs a single epoch, so the metrics can be recorded after every iteration (scikit-learn emits a ConvergenceWarning at each call, which is expected here).
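
The separating line itself can be read off the fitted parameters: the model predicts the positive class where w_0 + w_1*x_1 + w_2*x_2 > 0, so the boundary is x_2 = -(w_0 + w_1*x_1)/w_2. That is what the yr_clf line inside the loop below computes; written as a small helper (the name decision_line is ours, purely for readability):

def decision_line(clf, x1):
    # boundary of a no-hidden-layer MLPClassifier: w0 + w1*x1 + w2*x2 = 0
    w0 = clf.intercepts_[0][0]
    w1, w2 = clf.coefs_[0][0][0], clf.coefs_[0][1][0]
    return -(w0 + w1 * x1) / w2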

In [7]:
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import log_loss
clf = MLPClassifier(solver='sgd', alpha=0, hidden_layer_sizes=(), random_state=5, warm_start=True, activation='logistic', max_iter=1, learning_rate_init=0.002)

TMAX = 200
score_train = np.zeros(TMAX)
score_test = np.zeros(TMAX)
loss_train = np.zeros(TMAX)
loss_test = np.zeros(TMAX)

f,ax = plt.subplots(1,3,figsize=(15,5))


for t in range(TMAX):
    clf.fit(X.T,y)
    yr_clf = -clf.intercepts_[0][0]/clf.coefs_[0][1][0] - clf.coefs_[0][0][0]/clf.coefs_[0][1][0]*xr
    score_train[t] = clf.score(X.T,y)
    score_test[t] = clf.score(Xt.T,yt)
    loss_train[t] = clf.loss_
    loss_test[t] = log_loss(yt,clf.predict_proba(Xt.T))
    ax[2].plot(xr,yr_clf)
    

ax[2].set_ylim([-5,5])
ax[2].plot(xr,yr)
ax[2].scatter(X[0,:],X[1,:])
    
ax[0].plot(score_train)
ax[0].plot(score_test)
ax[1].plot(loss_train)
ax[1].plot(loss_test)
[Figure: accuracy (left) and loss (middle) on train and test, and the separating line after each iteration (right)]
In [20]:
# display the model's predicted probability as a contour plot
gx = np.linspace(-5., 5.)
gy = np.linspace(-4.5, 4.5)   # new names, so the training labels y are not overwritten
X1, Y1 = np.meshgrid(gx, gy)
XX = np.array([X1.ravel(), Y1.ravel()]).T
Z = clf.predict_proba(XX)[:,0]   # probability of class 0 (the negative class)
Z = Z.reshape(X1.shape)

CS = plt.contour(X1, Y1, Z,
                 levels=np.linspace(0, 1, 100))
CB = plt.colorbar(CS, shrink=1.0, extend='both')

plt.scatter(X[0, :], X[1, :], 10, color='red')
plt.ylim(-5,5)
plt.plot(xr,yr_clf)
[Figure: contour plot of the predicted probability, with the training points and the fitted line]

Another example with MNIST

Repeat the perceptron above with the MNIST data. How should the MLP be changed to classify all the categories?
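
For more than two classes, the output layer gets one unit per class and a softmax activation, and the loss becomes the multiclass cross-entropy; MLPClassifier switches to this automatically as soon as y contains more than two labels. For reference, a minimal softmax (a sketch of the idea, not a call into the library):

import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis: exp(z_k) / sum_j exp(z_j)
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)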

In [2]:
import pickle
import gzip
import numpy as np
import matplotlib.pyplot as plt

f = gzip.open('../mnist.pkl.gz', 'rb')
u = pickle._Unpickler(f)
u.encoding = 'latin1'   # needed to read this Python 2 pickle under Python 3
p = u.load()
train_set, valid_set, test_set = p
In [8]:
clf_10 = MLPClassifier(solver='sgd', alpha=0, hidden_layer_sizes=(), random_state=1, warm_start=True, activation='relu', max_iter=1, learning_rate_init=0.01)

TMAX = 100
score_train = np.zeros(TMAX)
score_test = np.zeros(TMAX)
loss_train = np.zeros(TMAX)
loss_test = np.zeros(TMAX)


for t in range(TMAX):
    clf_10.fit(train_set[0][:10000,:],train_set[1][:10000])    
    score_train[t] = clf_10.score(train_set[0][:10000,:],train_set[1][:10000])
    score_test[t] = clf_10.score(test_set[0],test_set[1])
    loss_train[t] = clf_10.loss_
    loss_test[t] = log_loss(test_set[1],clf_10.predict_proba(test_set[0]))
In [9]:
f,ax = plt.subplots(1,2,figsize=(15,5))
ax[0].plot(score_train)
ax[0].plot(score_test)
ax[1].plot(loss_train)
ax[1].plot(loss_test)
[Figure: accuracy (left) and loss (right) on MNIST, train and test]
In [10]:
from sklearn.metrics import confusion_matrix

CM = confusion_matrix(test_set[1], clf_10.predict(test_set[0]))
In [11]:
plt.imshow(CM)
plt.colorbar()
[Figure: the 10x10 confusion matrix on the MNIST test set]
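
The diagonal of CM counts the correct predictions per digit, so dividing each row by its total gives a per-class accuracy; a quick follow-up one can add:

per_class_acc = CM.diagonal() / CM.sum(axis=1)   # fraction of each true digit classified correctly
print(per_class_acc)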

Limitations of the perceptron

Consider the following dataset:

In [21]:
# XOR-like dataset: class 0 around corners (0,0) and (1,1), class 1 around (0,1) and (1,0)
x_xor = np.zeros((1000,2))
for i in np.arange(500):
    c = np.array([0,0])
    if np.random.randint(2) == 0:   # pick one of the two class-0 corners at random
        c = np.array([1,1])
    x_xor[i,:] = c + (np.random.uniform(0,1,size=(2))*0.2 - 0.1)
for i in np.arange(500):
    c = np.array([0,1])
    if np.random.randint(2) == 0:   # pick one of the two class-1 corners at random
        c = np.array([1,0])
    x_xor[i+500,:] = c + (np.random.uniform(0,1,size=(2))*0.2 - 0.1)

y_xor = np.zeros(1000)
y_xor[0:500] = 0
y_xor[500:]  = 1
In [22]:
plt.scatter(x_xor[0:500,0],x_xor[0:500,1])
plt.scatter(x_xor[500:,0],x_xor[500:,1])
plt.show()

What happens when we use a perceptron?

In [27]:
clf_XOR = MLPClassifier(solver='sgd', alpha=0, hidden_layer_sizes=(), random_state=4, warm_start=True, activation='logistic', max_iter=1, learning_rate_init=0.01)

TMAX = 500
xr = np.arange(-1,2.05)
plt.ylim([-0.3,1.3])
plt.xlim([-0.3,1.3])

score_train = np.zeros(TMAX)

for t in range(TMAX):
    clf_XOR.fit(x_xor,y_xor)
    score_train[t] = clf_XOR.score(x_xor,y_xor)
    yr_clf = -clf_XOR.intercepts_[0][0]/clf_XOR.coefs_[0][1][0] - clf_XOR.coefs_[0][0][0]/clf_XOR.coefs_[0][1][0]*xr
    plt.plot(xr,yr_clf)
    

plt.scatter(x_xor[0:500,0],x_xor[0:500,1])
plt.scatter(x_xor[500:,0],x_xor[500:,1])
[Figure: the candidate separating lines over the XOR point clouds]
In [28]:
plt.plot(score_train)
[Figure: training accuracy over the iterations]
In [25]:
plt.plot(xr,yr_clf)
plt.scatter(x_xor[0:500,0],x_xor[0:500,1])
plt.scatter(x_xor[500:,0],x_xor[500:,1])
[Figure: the final line over the XOR point clouds]
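
No single line can separate the two classes. Suppose some line w_1*x_1 + w_2*x_2 + b gave a positive value at the corners (0,1) and (1,0) and a negative one at (0,0) and (1,1). That would require w_2 + b > 0 and w_1 + b > 0, but also b < 0 and w_1 + w_2 + b < 0. Adding the first two inequalities gives w_1 + w_2 + 2b > 0, i.e. w_1 + w_2 + b > -b > 0, contradicting the last one. So the training accuracy stays bounded away from 1 no matter how long we run SGD.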

Let's add a hidden layer with two neurons!
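
To see why two hidden units are enough, here is a hand-crafted 2-2-1 network with logistic units that computes XOR exactly (the weights are illustrative and are not the ones SGD will find below):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def xor_net(x1, x2):
    h1 = sigmoid(20*x1 + 20*x2 - 10)    # saturates to OR(x1, x2)
    h2 = sigmoid(20*x1 + 20*x2 - 30)    # saturates to AND(x1, x2)
    return sigmoid(20*h1 - 20*h2 - 10)  # OR and not AND, i.e. XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, int(xor_net(a, b) > 0.5))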

In [34]:
clf_XOR_ML = MLPClassifier(solver='sgd', alpha=0, hidden_layer_sizes=(2,), random_state=4, warm_start=True, activation='logistic', max_iter=1, learning_rate_init=0.01)

TMAX = 1000
plt.ylim([-0.3,1.3])
plt.xlim([-0.3,1.3])
score_train = np.zeros(TMAX)

for t in range(TMAX):
    clf_XOR_ML.fit(x_xor,y_xor)
    score_train[t] = clf_XOR_ML.score(x_xor,y_xor)
    

plt.scatter(x_xor[0:500,0],x_xor[0:500,1])
plt.scatter(x_xor[500:,0],x_xor[500:,1])
[Figure: the XOR point clouds]
In [35]:
plt.plot(score_train)
[Figure: training accuracy over the iterations]
In [36]:
# display the model's predicted probability as a contour plot
gx = np.linspace(-0.5, 1.5)
gy = np.linspace(-0.5, 1.5)
X1, Y1 = np.meshgrid(gx, gy)
XX = np.array([X1.ravel(), Y1.ravel()]).T
Z = clf_XOR_ML.predict_proba(XX)[:,0]
Z = Z.reshape(X1.shape)

CS = plt.contour(X1, Y1, Z,
                 levels=np.linspace(0, 1, 100))
CB = plt.colorbar(CS, shrink=1.0, extend='both')


plt.scatter(x_xor[0:500,0],x_xor[0:500,1])
plt.scatter(x_xor[500:,0],x_xor[500:,1])
[Figure: contour plot of the predicted class-0 probability over the XOR data]
In [41]:
gx = np.linspace(-0.5, 1.5)
gy = np.linspace(-0.5, 1.5)
X1, Y1 = np.meshgrid(gx, gy)
XX = np.array([X1.ravel(), Y1.ravel()]).T
Z = clf_XOR_ML.predict_proba(XX)[:,0]
Z = Z.reshape(X1.shape)

# origin='lower' and extent align the image with the data coordinates
plt.imshow(Z, origin='lower', extent=[-0.5, 1.5, -0.5, 1.5])
[Figure: the same probability map rendered as an image]

Results on MNIST with a hidden layer

In [3]:
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import log_loss
In [4]:
clf_10 = MLPClassifier(solver='sgd', alpha=0, hidden_layer_sizes=(100,), random_state=1, warm_start=True, activation='relu', max_iter=1, learning_rate_init=0.01)

TMAX = 100
score_train = np.zeros(TMAX)
score_test = np.zeros(TMAX)
loss_train = np.zeros(TMAX)
loss_test = np.zeros(TMAX)


for t in range(TMAX):
    clf_10.fit(train_set[0][:10000,:],train_set[1][:10000])    
    score_train[t] = clf_10.score(train_set[0][:10000,:],train_set[1][:10000])
    score_test[t] = clf_10.score(test_set[0],test_set[1])
    loss_train[t] = clf_10.loss_
    loss_test[t] = log_loss(test_set[1],clf_10.predict_proba(test_set[0]))
In [5]:
plt.plot(score_train)
plt.plot(score_test)
[Figure: train and test accuracy on MNIST with one hidden layer of 100 units]