
2 Logistic Regression Type Neural Networks

Learning outcomes from this chapter

  • Logistic regression viewed as a shallow Neural Network
  • Maximum likelihood, loss function, cross-entropy
  • Softmax regression / multinomial regression model as a multiclass perceptron
  • Optimisation procedures: gradient descent, stochastic gradient descent, mini-batches
  • Understand the forward pass and the backpropagation step
  • Implementation from first principles

2.1 Logistic regression viewed as a shallow Neural Network

2.1.1 Sigmoid function

The sigmoid function $\sigma(\cdot)$, also known as the logistic function, is defined as follows:

$$\forall z \in \mathbb{R}, \quad \sigma(z) = \frac{1}{1 + e^{-z}} \; \in \; ]0,1[$$

Figure 2.1: Sigmoid function

z <- seq(-5, 5, 0.01)        # grid of values for z
sigma <- 1 / (1 + exp(-z))   # sigmoid evaluated on the grid

plot(sigma ~ z, type = "l", ylab = expression(sigma(z)))

2.1.2 Logistic regression

Logistic regression is a probabilistic model that aims to predict the probability that the outcome variable $y$ is 1. It is defined by assuming that $y|x;\theta \sim \text{Bernoulli}(\phi)$. Then, the logistic regression is defined by applying the sigmoid function to the linear predictor $\theta^T x$:

$$\phi = h_\theta(x) = p(y = 1|x;\theta) = \frac{1}{1 + \exp(-\theta^T x)} = \sigma(\theta^T x)$$

The logistic regression can also be presented as:

$$\text{Logit}[h_\theta(x)] = \text{logit}[p(y = 1|x;\theta)] = \theta^T x, \quad \text{where } \text{Logit}(p) = \log\left(\frac{p}{1-p}\right).$$
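As a quick check in R, the base functions qlogis() and plogis() implement the logit and the sigmoid, which are inverses of each other:

p <- 0.8
qlogis(p)            # logit: log(p / (1 - p))
plogis(qlogis(p))    # sigmoid undoes the logit: back to 0.8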

Remark about notation

  • $x = (x_0, \dots, x_d)^T$ represents a vector of $d+1$ features/predictors, with the convention $x_0 = 1$
  • $\theta = (\theta_0, \dots, \theta_d)^T$ is the vector of parameters related to the features $x$
  • $\theta_0$ is called the intercept by statisticians and the bias by computer scientists (generally denoted $b$)

2.1.3 Logistic regression for classification

Let's play with a simple example with 10 points and two classes (red and blue).

Figure 2.2: Classify red and blue points

clr1 <- c(rgb(1,0,0,1),rgb(0,0,1,1))
clr2 <- c(rgb(1,0,0,.2),rgb(0,0,1,.2))
x <- c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y <- c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z <- c(1,1,1,1,1,0,0,1,0,0)
df <- data.frame(x,y,z)
plot(x,y,pch=19,cex=2,col=clr1[z+1])

In order to classify the points, we run a logistic regression to get predictions

model <- glm(z~x+y,data=df,family=binomial)
#summary(model)

Then, we use the fitted model to define our classifier, which assigns to each point the class that is most likely.

pred_model <- function(x, y){
  predict(model, newdata = data.frame(x = x, y = y),
          type = "response") > .5
}

Using our decision rule, we can visualise the resulting partition of the space.

Figure 2.3: Partition using the logistic model

x_grid<-seq(0,1,length=101)
y_grid<-seq(0,1,length=101)
z_grid <- outer(x_grid,y_grid,pred_model)
image(x_grid,y_grid,z_grid,col=clr2)
points(x,y,pch=19,cex=2,col=clr1[z+1])

2.1.4 Likelihood of the logistic model

The maximum likelihood estimation procedure is generally used to estimate the parameters of the model, $\theta_0, \dots, \theta_d$.

$$p(y|x;\theta) = \begin{cases} h_\theta(x) & \text{if } y = 1, \text{ and} \\ 1 - h_\theta(x) & \text{otherwise,} \end{cases}$$
which could be written as

$$p(y|x;\theta) = h_\theta(x)^y \, (1 - h_\theta(x))^{1-y}.$$

Consider now the observation of $m$ training samples denoted by $\{(x^{(1)}, y^{(1)}), \dots, (x^{(m)}, y^{(m)})\}$ as i.i.d. observations from the logistic model. The likelihood is

$$L(\theta) = \prod_{i=1}^m p(y^{(i)}|x^{(i)};\theta) = \prod_{i=1}^m h_\theta(x^{(i)})^{y^{(i)}} \left(1 - h_\theta(x^{(i)})\right)^{1 - y^{(i)}}$$

Then, the following log-likelihood is maximized to obtain the estimates of $\theta$:

$$\ell(\theta) = \log L(\theta) = \sum_{i=1}^m \left[ y^{(i)} \log h_\theta(x^{(i)}) + (1 - y^{(i)}) \log\left(1 - h_\theta(x^{(i)})\right) \right]$$
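As an illustration, here is a minimal R sketch of this log-likelihood (the function loglik and its arguments are our own names, not from the text):

# Log-likelihood of the logistic model.
# X: m x (d+1) design matrix (first column of ones); y: 0/1 responses.
loglik <- function(theta, X, y) {
  h <- as.vector(1 / (1 + exp(-X %*% theta)))  # h_theta(x^(i)) for all i
  sum(y * log(h) + (1 - y) * log(1 - h))
}

Evaluated at the glm estimates of the toy example above, loglik(coef(model), cbind(1, df$x, df$y), df$z) should agree with logLik(model).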

2.1.5 Shallow Neural Network

The logistic model can be viewed as a shallow Neural Network.

This figure uses the same notation as the logistic regression model presented from the statistical point of view. However, in the following we will adopt the notation most frequently used in deep learning frameworks.

Figure 2.4: Shallow Neural Network


In this figure, $z = w^T x + b = w_1 x_1 + \dots + w_d x_d + b$ is the linear combination of the $d$ features/predictors, and $a = \sigma(z)$ is the activation function, the non-linear part of the Neural Network, used to get a prediction $\hat{y} \approx y$.
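A one-line forward step in R makes this concrete (the weights, bias and input below are hypothetical):

sigmoid <- function(z) 1 / (1 + exp(-z))
w <- c(0.4, -0.2, 0.1); b <- 0.3   # hypothetical weights and bias
x <- c(1.0, 2.0, -0.5)             # one sample with d = 3 features
z <- sum(w * x) + b                # linear part
a <- sigmoid(z)                    # activation: predicted P(y = 1)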

Remark: In the sequel, we will adopt the following notation:

  • $x = (x_1, \dots, x_d)^T \in \mathbb{R}^d$ represents a vector of $d$ features/predictors
  • $w = (w_1, \dots, w_d)^T$ is the vector of weights related to the features $x$
  • $b$ is called the bias
  • We consider the observation of $m$ training samples denoted by $\{(x^{(1)}, y^{(1)}), \dots, (x^{(m)}, y^{(m)})\}$

2.1.6 Entropy, Cross-entropy and Kullback-Leibler

Let's first talk about the cross-entropy, which is widely used as a loss function for classification purposes.

Cross-entropy (CE) is related to the entropy and the Kullback-Leibler divergence.

The entropy of a discrete probability distribution $p = (p_1, \dots, p_n)$ is defined as

$$H(p) = H(p_1, \dots, p_n) = -\sum_{i=1}^n p_i \log p_i,$$
which is a "measurement of the disorder or randomness of a system".

The Kullback-Leibler divergence, also known as the KL divergence, quantifies how similar a probability distribution $p$ is to a candidate distribution $q$:

$$KL(p;q) = \sum_{i=1}^n p_i \log \frac{p_i}{q_i}$$
Note that the $KL$ divergence is not a distance measure, as $KL(p;q) \neq KL(q;p)$. $KL$ is non-negative, and zero if and only if $p_i = q_i$ for all $i$.

One can easily show that

$$KL(p;q) = \underbrace{\sum_{i=1}^n p_i \log \frac{1}{q_i}}_{\text{cross entropy}} - \; H(p)$$

where the first term of the right part is the cross entropy:

$$CE(p,q) = \sum_{i=1}^n p_i \log \frac{1}{q_i} = -\sum_{i=1}^n p_i \log q_i$$
And we have the relation $CE(p,q) = H(p) + KL(p;q)$.

Thus, the cross-entropy can be interpreted as the uncertainty implicit in $H(p)$ plus the likelihood that the distribution $p$ could have been generated by the distribution $q$.
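A quick numerical check of the relation $CE(p,q) = H(p) + KL(p;q)$ in R (the two toy distributions are arbitrary):

p <- c(0.2, 0.5, 0.3)
q <- c(0.1, 0.6, 0.3)
H  <- -sum(p * log(p))     # entropy of p
KL <- sum(p * log(p / q))  # KL divergence between p and q
CE <- -sum(p * log(q))     # cross-entropy
all.equal(CE, H + KL)      # TRUE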

2.1.7 Mathematical expression of the Neural Network

For one example $x^{(i)}$, the output of this Neural Network is given by:

$$\hat{y}^{(i)} = \underbrace{\sigma(w^T x^{(i)} + b)}_{a^{(i)}, \text{ activation function}},$$

where $\sigma(\cdot)$ is the sigmoid function.

We aim to get the weights $w$ and the bias $b$ such that $\hat{y}^{(i)} \approx y^{(i)}$.

The loss function used for this network is the cross-entropy, which is defined for one sample $(x, y)$ as:

$$\mathcal{L}(\hat{y}, y) = -\left( y \log \hat{y} + (1 - y) \log(1 - \hat{y}) \right)$$

Then the Cost function for the entire training data set is:

$$J(w,b) = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(\hat{y}^{(i)}, y^{(i)})$$

Further, it is easy to see the connection with the log-likelihood function of the logistic model:

$$J(w,b) = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(\hat{y}^{(i)}, y^{(i)}) = -\frac{1}{m} \sum_{i=1}^m \left[ y^{(i)} \log a^{(i)} + (1 - y^{(i)}) \log(1 - a^{(i)}) \right] = -\frac{1}{m} \sum_{i=1}^m \left[ y^{(i)} \log h_\theta(x^{(i)}) + (1 - y^{(i)}) \log(1 - h_\theta(x^{(i)})) \right] \equiv -\frac{1}{m} \ell(\theta)$$

where $b \equiv \theta_0$ and $w \equiv (\theta_1, \dots, \theta_d)$.
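This equivalence can be checked numerically on the toy example of section 2.1.3, reusing the df and model objects fitted there (a sketch, not part of the original text):

a <- predict(model, type = "response")          # a^(i) = sigma(z^(i)) for each sample
-mean(df$z * log(a) + (1 - df$z) * log(1 - a))  # cost J at the fitted parameters
as.numeric(-logLik(model)) / nrow(df)           # -l(theta)/m: should match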

The optimization step will be carried out using gradient descent procedures and extensions, which are briefly presented in the section Optimisation.

2.2 Softmax regression

A softmax regression, also called multiclass logistic regression, is used to generalize logistic regression when there are more than 2 outcome classes ($k = 1, \dots, K$). The outcome variable is a discrete variable $y$ which can take one of $K$ values, $y \in \{1, \dots, K\}$. The multinomial regression model is also a GLM (Generalized Linear Model), where the distribution of the outcome $y$ is a Multinomial$(1, \pi)$ with $\pi = (\phi_1, \dots, \phi_K)$ a vector of probabilities of success for each category. This Multinomial$(1, \pi)$ is more precisely called the categorical distribution.

The multinomial regression model is parameterized by $K-1$ parameters, $\phi_1, \dots, \phi_{K-1}$, where $\phi_i = p(y = i; \phi)$ and $\phi_K = p(y = K; \phi) = 1 - \sum_{i=1}^{K-1} \phi_i$.

By convention, we set $\theta_K = 0$, which makes the parameter $\phi_i$ of each class $i$ be such that

$$\phi_i = \frac{\exp(\theta_i^T x)}{\sum_{j=1}^K \exp(\theta_j^T x)},$$
where $\theta_1, \dots, \theta_{K-1} \in \mathbb{R}^{d+1}$ are the parameters of the model. This model is also called softmax regression and generalizes the logistic regression. The output of the model is the estimated probability $p(y = i|x;\theta)$ for every value of $i = 1, \dots, K$.
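A minimal R sketch of the softmax transform (the scores are hypothetical; subtracting max(z) is a standard trick for numerical stability and does not change the result):

softmax <- function(z) {
  z <- z - max(z)           # numerical stability: avoids overflow in exp()
  exp(z) / sum(exp(z))
}
softmax(c(1.2, -0.3, 0))    # probabilities for K = 3 hypothetical scores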

2.2.1 Multinomial regression for classification

We illustrate the Multinomial model by considering three classes: red, yellow and blue.

Figure 2.5: Classify for three color points

clr1 <- c(rgb(1,0,0,1),rgb(1,1,0,1),rgb(0,0,1,1))
clr2 <- c(rgb(1,0,0,.2),rgb(1,1,0,.2),rgb(0,0,1,.2))
x <- c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y <- c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z <- c(1,2,2,2,1,0,0,1,0,0)
df <- data.frame(x,y,z)
plot(x,y,pch=19,cex=2,col=clr1[z+1])

One can use the R package nnet to run a multinomial regression model:

library(nnet)
model.mult <- multinom(z~x+y,data=df)
## # weights:  12 (6 variable)
## initial  value 10.986123 
## iter  10 value 0.794930
## iter  20 value 0.065712
## iter  30 value 0.064409
## iter  40 value 0.061612
## iter  50 value 0.058756
## iter  60 value 0.056225
## iter  70 value 0.055332
## iter  80 value 0.052887
## iter  90 value 0.050644
## iter 100 value 0.048117
## final  value 0.048117 
## stopped after 100 iterations

Then, the output gives a predicted probability for each of the three colours, and we attribute the colour that is most likely.

pred_mult <- function(x, y){
  res <- predict(model.mult,
                 newdata = data.frame(x = x, y = y), type = "probs")
  apply(res, MARGIN = 1, which.max)
}
x_grid<-seq(0,1,length=101)
y_grid<-seq(0,1,length=101)
z_grid <- outer(x_grid,y_grid,FUN=pred_mult)

We can now visualize the three regions, the frontier being linear, and the intersection being the equiprobable case.

Figure 2.6: Classifier using multinomial model

image(x_grid,y_grid,z_grid,col=clr2)
points(x,y,pch=19,cex=2,col=clr1[z+1])

2.2.2 Likelihood of the softmax model

The maximum likelihood estimation procedure consists of maximizing the log-likelihood:

$$\ell(\theta) = \sum_{i=1}^m \log p(y^{(i)}|x^{(i)};\theta) = \sum_{i=1}^m \log \prod_{l=1}^K \left( \frac{e^{\theta_l^T x^{(i)}}}{\sum_{j=1}^K e^{\theta_j^T x^{(i)}}} \right)^{1\{y^{(i)} = l\}}$$

2.2.3 Softmax regression as shallow Neural Network

The Softmax regression model can be viewed as a shallow Neural Network.

In this representation, there are $K$ neurons, where each neuron is defined by its own set of weights $w_i \in \mathbb{R}^d$ and a bias term $b_i$. The linear part is denoted by $z_i = w_i^T x + b_i$ and the non-linear part (activation part) is $\sigma_i = a_i = \frac{\exp(z_i)}{\sum_{j=1}^K \exp(z_j)}$. Note that the denominator of the activation part is defined using the weights from the other neurons. The output is a vector of probabilities $(a_1, \dots, a_K)$, and the following function is used for classification purposes:

$$\hat{y} = \underset{i \in \{1, \dots, K\}}{\operatorname{argmax}} \; a_i$$

2.2.4 Loss function: cross-entropy for categorical variable

Let us first consider one training sample $(x, y)$. The cross-entropy loss for a categorical response variable, also called the Softmax Loss, is defined as:

$$CE = -\sum_{i=1}^K \tilde{y}_i \log p(y = i) = -\sum_{i=1}^K \tilde{y}_i \log a_i = -\sum_{i=1}^K \tilde{y}_i \log\left( \frac{\exp(z_i)}{\sum_{j=1}^K \exp(z_j)} \right),$$
where $\tilde{y}_i = 1\{y = i\}$ is a binary variable indicating whether $y$ is in class $i$.

This expression can be rewritten as

$$CE = -\log \prod_{i=1}^K \left( \frac{\exp(z_i)}{\sum_{j=1}^K \exp(z_j)} \right)^{1\{y = i\}}$$

Then, the cost function for the m training samples is defined as

$$J(w,b) = -\frac{1}{m} \sum_{i=1}^m \log \prod_{k=1}^K \left( \frac{\exp(z_k^{(i)})}{\sum_{j=1}^K \exp(z_j^{(i)})} \right)^{1\{y^{(i)} = k\}} \equiv -\frac{1}{m} \ell(\theta)$$
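Since only the true-class factor of the product survives, the cost reduces to $-\frac{1}{m} \sum_{i=1}^m \log a^{(i)}_{y^{(i)}}$. A short R sketch of this computation (the function names are ours):

# Z: m x K matrix of linear scores z_k^(i); y: labels in 1..K.
softmax_rows <- function(Z) exp(Z) / rowSums(exp(Z))
cost_ce <- function(Z, y) {
  A <- softmax_rows(Z)                    # m x K matrix of probabilities
  -mean(log(A[cbind(seq_along(y), y)]))   # pick a_{y^(i)} for each sample
}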

2.3 Optimisation

2.3.1 Gradient Descent

Consider unconstrained, smooth convex optimization

$$\min_{\theta \in \mathbb{R}^d} \; f(\theta),$$


Algorithm: Gradient Descent


  1. Choose an initial point $\theta^{(0)} \in \mathbb{R}^d$
  2. Repeat $\theta^{(k+1)} = \theta^{(k)} - \alpha \cdot \nabla f(\theta^{(k)})$, $\; k = 0, 1, 2, \dots$
  3. Stop at some point: $\|\theta^{(k+1)} - \theta^{(k)}\|_2^2 < \epsilon$

Here, $\alpha$ is called the learning rate.

Suppose that we want to find $x$ that minimizes $f(x) = 1.2\,(x-2)^2 + 3.2$.

Figure 2.7: Closed form solution (red)

f.x <- function(x) 1.2*(x-2)**2+3.2
curve(1.2*(x-2)**2+3.2, 0, 4, ylab = "f(x)")
abline(v=2,col="red")

In general, we cannot find a closed form solution, but we can compute the gradient $\nabla f(x)$:

simple.grad.des <- function(x0, alpha, epsilon = 0.00001, max.iter = 300){
  tol <- 1; xold <- x0; res <- x0; iter <- 1
  while (tol > epsilon & iter < max.iter){
    xnew <- xold - alpha*2.4*(xold-2)  # gradient step: f'(x) = 2.4*(x-2)
    tol <- abs(xnew-xold)              # movement between successive iterates
    xold <- xnew
    res <- c(res, xnew)
    iter <- iter + 1
  }
  return(res)
}
result <- simple.grad.des(0,0.01,max.iter=200)
#result[length(result)]

Convergence with a learning rate of 0.01:

Figure 2.8: alpha=0.01

curve(1.2*(x-2)**2+3.2, 0, 4, ylab = "f(x)")
abline(v=2,col="red")
points(result,f.x(result),col="blue")

Convergence with a learning rate of 0.83:

Figure 2.9: alpha=0.83

result2 <- simple.grad.des(0,0.83,max.iter=200)
#result2[length(result2)]
curve(1.2*(x-2)**2+3.2, 0, 4, ylab = "f(x)")
abline(v=2,col="red")
points(result2,f.x(result2),col="blue",type="o")

2.3.2 Gradient Descent for logistic regression

Given $(x^{(i)}, y^{(i)}) \in \mathbb{R}^d \times \{0,1\}$ for $i = 1, \dots, m$, consider the cross-entropy loss function for this data set:

$$f(w) = \frac{1}{m} \sum_{i=1}^m \left( -y^{(i)} w^T x^{(i)} + \log(1 + \exp(w^T x^{(i)})) \right) = \sum_{i=1}^m f_i(w)$$

The gradient is

$$\nabla f(w) = \frac{1}{m} \sum_{i=1}^m \left( p^{(i)}(w) - y^{(i)} \right) x^{(i)},$$

where $p^{(i)}(w) = p(Y = 1|x^{(i)}, w) = \exp(w^T x^{(i)}) / (1 + \exp(w^T x^{(i)}))$, $\; i = 1, \dots, m$.


Algorithm: Batch Gradient Descent


  1. Initialize $w = (0, \dots, 0)$
  2. Repeat until convergence
    • Let $g = (0, \dots, 0)$ be the gradient vector
    • for $i = 1{:}m$ do: $\; p^{(i)} = \exp(w^T x^{(i)}) / (1 + \exp(w^T x^{(i)}))$; $\; \text{error}^{(i)} = p^{(i)} - y^{(i)}$; $\; g = g + \text{error}^{(i)} \cdot x^{(i)}$
    • end
    • $w = w - \alpha \cdot g / m$
  3. End repeat until convergence

Note that the algorithm uses all samples to compute the gradient. This approach is called batch gradient descent.

2.3.3 Stochastic gradient descent


Algorithm: Stochastic Gradient Descent


  1. Initialize $w = (0, \dots, 0)$

  2. Repeat until convergence

    Pick a sample $i$

    $p^{(i)} = \exp(w^T x^{(i)}) / (1 + \exp(w^T x^{(i)}))$

    $\text{error}^{(i)} = p^{(i)} - y^{(i)}$

    $w = w - \alpha \, (\text{error}^{(i)} \times x^{(i)})$

  3. End repeat until convergence


The coefficient $w$ is updated after each sample.

Remark: The gradient computation $\nabla f(w) = \frac{1}{m} \sum_{i=1}^m (p^{(i)}(w) - y^{(i)}) x^{(i)}$ is doable when $m$ is moderate, but not when $m \approx 500$ million.

  • One batch update costs $O(md)$

  • One stochastic update costs $O(d)$

2.3.4 Mini-Batches

In practice, mini-batches are often used to:

  1. Compute the gradient based on a small subset of samples
  2. Make an update to the coefficient vector

2.3.5 Example with logistic regression (d = 2)

  • Simulate $m$ samples from the true model:

$$p(Y = 1|x^{(i)}, w) = \frac{1}{1 + \exp(-w_1 x_1^{(i)} - w_2 x_2^{(i)})}$$

set.seed(10)
m <- 5000 ;d <- 2 ;w <- c(0.5,-1.5)
x <- matrix(rnorm(m*2),ncol=2,nrow=m)
ptrue <- 1/(1+exp(-x%*%matrix(w,ncol=1)))
y <- rbinom(m,size=1,prob = ptrue)
(w.est <- coef(glm(y~x[,1]+x[,2]-1,family=binomial)))
##    x[, 1]    x[, 2] 
##  0.557587 -1.569509
  • The cross-entropy loss for this dataset:
Cost.fct <- function(w1,w2) {
  w <- c(w1,w2)
  cost <- sum(-y*x%*%matrix(w,ncol=1)+log(1+exp(x%*%matrix(w,ncol=1))))
  return(cost)
}

Figure 2.10: Contour plot of the Cost function

w1 <- seq(0, 1, 0.05)
w2 <- seq(-2, -1, 0.05)
cost <- outer(w1, w2, function(x,y) mapply(Cost.fct,x,y))
contour(x = w1, y = w2, z = cost)
points(x=w.est[1],y=w.est[2],col="black",lwd=2,lty=2,pch=8)
  • Implementation of Batch Gradient Descent
sigmoid <- function(x) 1/(1+exp(-x))

batch.GD <- function(theta,alpha,epsilon,iter.max=500){
  tol <- 1
  iter <-1
  res.cost <- Cost.fct(theta[1],theta[2])
  res.theta <- theta
  while (tol > epsilon & iter<iter.max) {
      error <- sigmoid(x%*%matrix(theta,ncol=1))-y
      theta.up <- theta-as.vector(alpha*matrix(error,nrow=1)%*%x)
      res.theta <- cbind(res.theta,theta.up)
      tol <- sum((theta-theta.up)**2)^0.5
      theta <- theta.up
      cost <- Cost.fct(theta[1],theta[2])
      res.cost <- c(res.cost,cost)
      iter <- iter +1
    }
  result <- list(theta=theta,res.theta=res.theta,res.cost=res.cost,iter=iter,tol.theta=tol)
  
  return(result)
}
dim(x);length(y)
## [1] 5000    2
## [1] 5000

Figure 2.11: Convergence Batch Gradient Descent

theta0 <- c(0,-1); alpha=0.001
test <- batch.GD(theta=theta0,alpha,epsilon = 0.0000001)
plot(test$res.cost, ylab = "cost function", xlab = "iteration", main = "alpha=0.001", type = "l")
abline(h=Cost.fct(w.est[1],w.est[2]),col="red")

Figure 2.12: Convergence of BGD

contour(x = w1, y = w2, z = cost)
points(x=w.est[1],y=w.est[2],col="black",lwd=2,lty=2,pch=8)
record <- as.data.frame(t(test$res.theta))
points(record,col="red",type="o")
  • Implementation of Stochastic Gradient Descent
Stochastic.GD <- function(theta,alpha,epsilon=0.0001,epoch=50){
  epoch.max <- epoch
  tol <- 1
  epoch <-1
  res.cost <- Cost.fct(theta[1],theta[2])
  res.cost.outer <- res.cost
  res.theta <- theta
  while (tol > epsilon & epoch<epoch.max) {
    for (i in 1:nrow(x)){
      errori <- sigmoid(sum(x[i,]*theta))-y[i]
      xi <- x[i,]
      theta.up <- theta-alpha*errori*xi
      res.theta <- cbind(res.theta,theta.up)
      tol <- sum((theta-theta.up)**2)^0.5
      theta <- theta.up
      cost <- Cost.fct(theta[1],theta[2])
      res.cost <- c(res.cost,cost)
    }
    epoch <- epoch +1
    cost.outer <- Cost.fct(theta[1],theta[2])
    res.cost.outer <- c(res.cost.outer,cost.outer)
    }
  result <- list(theta=theta,res.theta=res.theta,res.cost=res.cost,
                 res.cost.outer=res.cost.outer,epoch=epoch,tol.theta=tol)
  return(result)
}

test.SGD <- Stochastic.GD(theta=theta0,alpha,epsilon = 0.0001,epoch=10)

Figure 2.13: Convergence Stochastic Gradient Descent

plot(test.SGD$res.cost, ylab = "cost function", xlab = "iteration", main = "alpha=0.001", type = "l")
abline(h=Cost.fct(w.est[1],w.est[2]),col="red")

Figure 2.14: Convergence of Stochastic Gradient Descent

contour(x = w1, y = w2, z = cost)
points(x=w.est[1],y=w.est[2],col="black",lwd=2,lty=2,pch=8)
record2 <- as.data.frame(t(test.SGD$res.theta))
points(record2,col="red",lwd=0.5)
  • Implementation of mini-batch Gradient Descent
Mini.Batch <- function (theta,dataTrain, alpha = 0.1, maxIter = 10, nBatch = 2, seed = NULL,intercept=NULL) 
{
    batchRate <- 1/nBatch
    dataTrain <- matrix(unlist(dataTrain), ncol = ncol(dataTrain), byrow = FALSE)
    set.seed(seed)
    dataTrain <- dataTrain[sample(nrow(dataTrain)), ]
    set.seed(NULL)
    
    res.cost <- Cost.fct(theta[1],theta[2])
    res.cost.outer <- res.cost
    res.theta <- theta
    
    if(!is.null(intercept)) dataTrain <- cbind(1, dataTrain)
    
    temporaryTheta <- matrix(ncol = length(theta), nrow = 1)
    theta <- matrix(theta,ncol = length(theta), nrow = 1)
    
    for (iteration in 1:maxIter ) {
        if (iteration%%nBatch == 1 | nBatch == 1) {
            temp <- 1
            x <- nrow(dataTrain) * batchRate
            temp2 <- x
        }
        batch <- dataTrain[temp:temp2, ]
        inputData <- batch[, 1:(ncol(batch) - 1)]   # feature columns
        outputData <- batch[, ncol(batch)]          # response column
        rowLength <- nrow(batch)
        temp <- temp + x
        temp2 <- temp2 + x
        error <- matrix(sigmoid(inputData %*% t(theta)), ncol = 1) - outputData
        for (column in 1:length(theta)) {
            term <- error * inputData[, column]
            gradient <- sum(term)   # gradient summed over the mini-batch
            temporaryTheta[1, column] = theta[1, column] - (alpha * gradient)
        }
        
        theta <- temporaryTheta
        res.theta <- cbind(res.theta,as.vector(theta))
        cost.outer <- Cost.fct(theta[1,1],theta[1,2])
        res.cost.outer <- c(res.cost.outer,cost.outer)
        
    }
    result <- list(theta=theta,res.theta=res.theta,res.cost.outer=res.cost.outer)
    return(result)
}
theta0 <- c(0,-1); alpha=0.001
data.Train <- cbind(x,y)
test.miniBatch <- Mini.Batch(theta=theta0,dataTrain=data.Train, alpha = 0.001, maxIter = 100, nBatch = 10, seed = NULL,intercept=NULL)
## Result from Mini-Batch
test.miniBatch$theta
##           [,1]      [,2]
## [1,] 0.5515469 -1.561252

Figure 2.15: Convergence Mini Batch

plot(test.miniBatch$res.cost.outer,ylab="cost function",xlab="iteration",main="alpha=0.001",type="l")
abline(h=Cost.fct(w.est[1],w.est[2]),col="red")

Figure 2.16: Convergence of Mini-Batch Gradient Descent

contour(x = w1, y = w2, z = cost)
points(x=w.est[1],y=w.est[2],col="black",lwd=2,lty=2,pch=8)
record3 <- as.data.frame(t(test.miniBatch$res.theta))
points(record3,col="red",lwd=0.5,type="o")

2.4 Chain rule

The univariate chain rule and the multivariate chain rule are the key concepts for calculating the derivative of the cost with respect to any weight in the network. A refresher on the different chain rules follows.

2.4.1 Univariate Chain rule

  • Univariate chain rule

$$\frac{\partial f(g(w))}{\partial w} = \frac{\partial f(g(w))}{\partial g(w)} \cdot \frac{\partial g(w)}{\partial w}$$
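The rule can be checked symbolically with R's D() on a small composite function (our choice of $f(u) = \log(u)$ and $g(w) = 1 + w^2$):

# d/dw log(1 + w^2) should equal f'(g(w)) * g'(w) = (1/(1 + w^2)) * 2w
D(expression(log(1 + w^2)), "w")
## 2 * w/(1 + w^2)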

2.4.2 Multivariate Chain Rule

  • Part I: Let $z = f(x,y)$, $x = g(t)$ and $y = h(t)$, where $f$, $g$ and $h$ are differentiable functions. Then $z = f(x,y) = f(g(t), h(t))$ is a function of $t$, and

$$\frac{dz}{dt} = \frac{df}{dt} = f_x(x,y) \frac{dx}{dt} + f_y(x,y) \frac{dy}{dt} = \frac{\partial f}{\partial x} \frac{dx}{dt} + \frac{\partial f}{\partial y} \frac{dy}{dt}.$$

  • Part II:
  1. Let $z = f(x,y)$, $x = g(s,t)$ and $y = h(s,t)$, where $f$, $g$ and $h$ are differentiable functions. Then $z$ is a function of $s$ and $t$, and

$$\frac{\partial z}{\partial s} = \frac{\partial f}{\partial x} \frac{\partial x}{\partial s} + \frac{\partial f}{\partial y} \frac{\partial y}{\partial s}$$

and

$$\frac{\partial z}{\partial t} = \frac{\partial f}{\partial x} \frac{\partial x}{\partial t} + \frac{\partial f}{\partial y} \frac{\partial y}{\partial t}$$

  2. Let $z = f(x_1, x_2, \dots, x_m)$ be a differentiable function of $m$ variables, where each of the $x_i$ is a differentiable function of the variables $t_1, t_2, \dots, t_n$. Then $z$ is a function of the $t_i$, and

$$\frac{\partial z}{\partial t_i} = \frac{\partial f}{\partial x_1} \frac{\partial x_1}{\partial t_i} + \frac{\partial f}{\partial x_2} \frac{\partial x_2}{\partial t_i} + \dots + \frac{\partial f}{\partial x_m} \frac{\partial x_m}{\partial t_i}.$$
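A finite-difference check of Part I in R, with $f(x,y) = x^2 y$, $x = \cos(t)$, $y = \sin(t)$ (an arbitrary example of ours):

t0 <- 1; h <- 1e-6
g <- function(t) cos(t)^2 * sin(t)                 # z as a function of t
(g(t0 + h) - g(t0 - h)) / (2 * h)                  # numerical dz/dt
2*cos(t0)*sin(t0)*(-sin(t0)) + cos(t0)^2*cos(t0)   # f_x dx/dt + f_y dy/dt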

2.5 Forward pass and backpropagation procedures

Backpropagation is the key tool adopted by the neural network community to update the weights. This method exploits the derivative of the cost with respect to each weight $w$ using the chain rule (univariate and multivariate rules).

2.5.1 Example with the logistic model using cross-entropy loss

The logistic model can be viewed as a simple neural network with no hidden layer and only a single output node with a sigmoid activation function. The sigmoid activation function $\sigma(\cdot)$ is applied to the linear combination of the features $z = w^T x + b$ and provides the predicted value $a$, which represents the probability that the input $x$ belongs to class one.

Figure 2.17: Forward pass: Logistic model


Forward pass: the forward propagation step consists of getting predictions for the training samples and computing the error through a loss function, in order to further adapt the weights $w$ and the bias $b$ to decrease the error. This forward pass goes through the following equations:

  • $z^{(i)} = w^T x^{(i)} + b$

  • $a^{(i)} = \sigma(z^{(i)}) = \frac{1}{1 + e^{-z^{(i)}}}, \quad i = 1, \dots, m$

  • $\mathcal{L} = J(w,b) = -\frac{1}{m} \sum_{i=1}^m \left[ y^{(i)} \log(a^{(i)}) + (1 - y^{(i)}) \log(1 - a^{(i)}) \right]$

The cost $\mathcal{L} = J(w,b)$ is the error we want to reduce by adjusting the weights $w$ and the bias $b$. Variations of the gradient descent algorithm are exploited to update the parameters iteratively. Thus, we have to derive the equations for the gradients of the loss function in order to propagate the error back and adapt the model parameters $w$ and $b$.

Backward pass based on computation graph:

The chain rule is used and generally illustrated through a computation graph:

Figure 2.18: backpropagation: Logistic model


First, to simplify this illustration, recall that:

$$J(w,b) = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(\hat{y}^{(i)}, y^{(i)}) = -\frac{1}{m} \sum_{i=1}^m \left[ y^{(i)} \log a^{(i)} + (1 - y^{(i)}) \log(1 - a^{(i)}) \right]$$

Thus $\frac{\partial J(w,b)}{\partial w} = \frac{1}{m} \sum_{i=1}^m \frac{\partial \mathcal{L}(\hat{y}^{(i)}, y^{(i)})}{\partial w}$.

To get $\frac{\partial \mathcal{L}(\hat{y}^{(i)}, y^{(i)})}{\partial w}$, the chain rule is used by considering one sample $(x, y)$ (the superscript $(i)$ is omitted).

Computation graphs are mainly exploited to show dependencies and thus derive the equations for the gradients easily. To compute the gradient of the cost (loss function), one only needs to go back through the computation graph and multiply the gradients by each other:

$$\frac{\partial \mathcal{L}(\hat{y}, y)}{\partial w} = \frac{\partial \mathcal{L}}{\partial a} \cdot \frac{\partial a}{\partial z} \cdot \frac{\partial z}{\partial w},$$

where $a = \sigma(z)$ and $z = b + w^T x$.

  • $\frac{\partial \mathcal{L}}{\partial a} = -\frac{y}{a} + \frac{1-y}{1-a} = \frac{a - y}{a(1-a)}$
  • $\frac{\partial \sigma(z)}{\partial z} = \sigma(z)(1 - \sigma(z)) = a(1-a)$
  • $\frac{\partial z}{\partial w} = x$

Thus,

$$\frac{\partial \mathcal{L}(\hat{y}, y)}{\partial w} = x(a - y)$$

and so, $\frac{\partial J(w,b)}{\partial w} = \frac{1}{m} \sum_{i=1}^m x^{(i)} (\sigma(z^{(i)}) - y^{(i)})$. In the same vein, it follows that

$$\frac{\partial J(w,b)}{\partial b} = \frac{1}{m} \sum_{i=1}^m (\sigma(z^{(i)}) - y^{(i)})$$
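These formulas can be verified numerically against Cost.fct from section 2.3.5 (which sums over the samples without the $1/m$ factor, so it is dropped here too); a sketch reusing the x, y and sigmoid objects defined there:

w0 <- c(0.3, -1.0); h <- 1e-6
analytic <- as.vector(t(x) %*% (sigmoid(x %*% matrix(w0, ncol = 1)) - y))
num1 <- (Cost.fct(w0[1] + h, w0[2]) - Cost.fct(w0[1] - h, w0[2])) / (2 * h)
num2 <- (Cost.fct(w0[1], w0[2] + h) - Cost.fct(w0[1], w0[2] - h)) / (2 * h)
round(analytic - c(num1, num2), 4)   # both entries should be close to 0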

2.5.2 Updating weights using Backpropagation

In the neural network framework, the weights are updated using gradient descent concepts:

$$w = w - \alpha \frac{\partial J(w,b)}{\partial w}$$

The main steps for updating the weights are:

1. Take a batch of training samples
2. Forward propagation to get the corresponding cost
3. Backpropagate the cost to get the gradients
4. Update the weights using the gradients
5. Repeat steps 1 to 4 for a number of iterations

2.6 Backpropagation for the Softmax Shallow Network

2.6.1 Reminder of some notation

We consider $K$ classes: $y^{(i)} \in \{1, \dots, K\}$. Given a sample $x$, we want to estimate $p(y = k|x)$ for each $k = 1, \dots, K$. The softmax model is defined by $\sigma_i = a_i = \frac{\exp(z_i)}{\sum_{j=1}^K \exp(z_j)}$, with $a_i = p(y = i|x, W)$ and $z_i = b_i + w_i^T x$, where we write $W$ to denote all the weights of our network. Then $W = (w_1, \dots, w_K)$ is a $d \times K$ matrix obtained by concatenating $w_1, \dots, w_K$ into columns.

2.6.2 Chain rule using cross-entropy loss

Let us consider one training sample $(x, y)$. The cross-entropy loss is

$$CE = -\sum_{i=1}^K \tilde{y}_i \log a_i,$$

where $\tilde{y}_i = 1\{y = i\}$ is a binary variable indicating whether $y$ is in class $i$.

To get $\frac{\partial CE}{\partial w_j}$ ($j = 1, \dots, K$), we need to use the multivariate chain rule.

First, we derive

$$\frac{\partial CE}{\partial z_j} = \sum_{k=1}^K \frac{\partial CE}{\partial a_k} \cdot \frac{\partial a_k}{\partial z_j} = \frac{\partial CE}{\partial a_j} \cdot \frac{\partial a_j}{\partial z_j} + \sum_{k \neq j} \frac{\partial CE}{\partial a_k} \cdot \frac{\partial a_k}{\partial z_j}$$

  • $\frac{\partial CE}{\partial a_j} = -\frac{\tilde{y}_j}{a_j}$

  • if $i = j$:

$$\frac{\partial a_i}{\partial z_j} = \frac{\partial}{\partial z_j} \left( \frac{e^{z_i}}{\sum_{k=1}^K e^{z_k}} \right) = \frac{e^{z_i} \left( \sum_{k=1}^K e^{z_k} - e^{z_j} \right)}{\left( \sum_{k=1}^K e^{z_k} \right)^2} = \frac{e^{z_j}}{\sum_{k=1}^K e^{z_k}} \times \frac{\sum_{k=1}^K e^{z_k} - e^{z_j}}{\sum_{k=1}^K e^{z_k}} = a_i (1 - a_j)$$

  • if $i \neq j$:

$$\frac{\partial a_i}{\partial z_j} = \frac{\partial}{\partial z_j} \left( \frac{e^{z_i}}{\sum_{k=1}^K e^{z_k}} \right) = \frac{0 - e^{z_j} e^{z_i}}{\left( \sum_{k=1}^K e^{z_k} \right)^2} = -\frac{e^{z_j}}{\sum_{k=1}^K e^{z_k}} \times \frac{e^{z_i}}{\sum_{k=1}^K e^{z_k}} = -a_j \cdot a_i$$

So we can rewrite it as

$$\frac{\partial a_i}{\partial z_j} = \begin{cases} a_i (1 - a_j) & \text{if } i = j \\ -a_j \cdot a_i & \text{if } i \neq j \end{cases}$$

Thus, we get

$$\frac{\partial CE}{\partial z_j} = \frac{\partial CE}{\partial a_j} \cdot \frac{\partial a_j}{\partial z_j} + \sum_{k \neq j} \frac{\partial CE}{\partial a_k} \cdot \frac{\partial a_k}{\partial z_j} = -\tilde{y}_j (1 - a_j) + \sum_{k \neq j} \tilde{y}_k a_j = -\tilde{y}_j + a_j \sum_{k=1}^K \tilde{y}_k = a_j - \tilde{y}_j,$$
since $\sum_{k=1}^K \tilde{y}_k = 1$.

We can now derive the gradient for the weights as:

$$\frac{\partial CE}{\partial w_j} = \sum_{k=1}^K \frac{\partial CE}{\partial z_k} \cdot \frac{\partial z_k}{\partial w_j} = (a_j - \tilde{y}_j) \, x$$

In the same way, we get

$$\frac{\partial CE}{\partial b_j} = \sum_{k=1}^K \frac{\partial CE}{\partial z_k} \cdot \frac{\partial z_k}{\partial b_j} = (a_j - \tilde{y}_j)$$
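A finite-difference check of $\frac{\partial CE}{\partial z_j} = a_j - \tilde{y}_j$ for one sample (the scores and label below are arbitrary):

zs <- c(0.5, -1.2, 0.3); ytilde <- c(0, 1, 0); h <- 1e-6
softmax <- function(z) exp(z) / sum(exp(z))
CE <- function(z) -sum(ytilde * log(softmax(z)))
num <- sapply(1:3, function(j) {
  e <- replace(numeric(3), j, h)        # perturb only coordinate j
  (CE(zs + e) - CE(zs - e)) / (2 * h)
})
cbind(analytic = softmax(zs) - ytilde, numeric = num)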

2.6.3 Computation Graph could help

Let us consider a simple example with $K = 3$ ($y \in \{1, 2, 3\}$) and two features ($x_1$ and $x_2$). The computational graph for this softmax neural network model helps us to visualize the dependencies between nodes and then to derive the gradient of the cost (loss) with respect to each parameter ($w_j \in \mathbb{R}^2$ and $b_j$, $j = 1, 2, 3$).

Figure 2.19: Computation Graph: softmax


Let's write, as an example, the expression for $\frac{\partial \mathcal{L}}{\partial w_{2,1}}$:

$$\frac{\partial \mathcal{L}}{\partial w_{2,1}} = \sum_{i=1}^3 \left( \frac{\partial \mathcal{L}}{\partial a_i} \right) \left( \frac{\partial a_i}{\partial z_2} \right) \left( \frac{\partial z_2}{\partial w_{2,1}} \right) = \left( \frac{\partial \mathcal{L}}{\partial a_1} \right) \left( \frac{\partial a_1}{\partial z_2} \right) \left( \frac{\partial z_2}{\partial w_{2,1}} \right) + \left( \frac{\partial \mathcal{L}}{\partial a_2} \right) \left( \frac{\partial a_2}{\partial z_2} \right) \left( \frac{\partial z_2}{\partial w_{2,1}} \right) + \left( \frac{\partial \mathcal{L}}{\partial a_3} \right) \left( \frac{\partial a_3}{\partial z_2} \right) \left( \frac{\partial z_2}{\partial w_{2,1}} \right)$$

In fact, we are summing up the contributions of the change of $w_{2,1}$ over the different "paths" (in red in the figure above). When we change $w_{2,1}$, the activations $a_1$, $a_2$ and $a_3$ change as a result. The changes in $a_1$, $a_2$ and $a_3$ then affect $\mathcal{L}$. We sum up all the changes that $w_{2,1}$ produces through $a_1$, $a_2$ and $a_3$ to $\mathcal{L}$.

