
Feedforward neural networks: everything you need to know

Emmanuel Ohiri

Feedforward neural networks (FNNs), often referred to as multi-layer perceptrons (MLPs), are one of the simplest forms of artificial neural network. They are typically used for tasks like classification and regression, which makes them a foundational architecture in deep learning.

In this article, we will explore feedforward neural networks in detail. We'll start with the basics—what they are and how they differ from other types of neural networks. Then, we’ll delve into their architecture, how they are trained, and their real-world applications.

CUDO Compute offers the latest NVIDIA GPUs to support your entire deep learning pipeline. You can start building your Feedforward Neural Network today using the NVIDIA H100 on demand starting from $2.79/hour. Get started now!

What is a feedforward neural network?

A Feedforward Neural Network is an artificial neural network (ANN) that consists of multiple layers of neurons, each fully connected to the next. In this structure, neurons in one layer connect to every neuron in the subsequent layer without any feedback loops or cycles. This architecture is typical of a "feedforward" model, meaning information flows strictly from input to output, passing through any intermediate hidden layers.

[Image: diagram of a feedforward neural network. Source: Learn OpenCV]

The dense connectivity of an FNN allows the network to capture and model intricate relationships and patterns in the data. As information moves through the network, it learns to recognize increasingly abstract features, like identifying edges before recognizing shapes in an image.

This density and ability to handle different levels of complexity help FNNs in tasks like image recognition and language processing, where understanding both simple and intricate details is important. Before we discuss how FNNs work, we will explore their architecture.

Architecture of a feedforward neural network

The architecture of a feedforward neural network is fundamental to its operation. Since we discussed these in an earlier article, we will briefly run through them. Let's break down the key components:

  • **Layers:** An FNN is organized into layers of interconnected neurons. There are three primary types of layers:
  • Input layer: The first layer receives the raw input data, such as an image's pixel values or numerical features. Each node in this layer represents a single input feature.
  • Hidden layers: These intermediate layers process the input data, extracting increasingly complex and abstract features. The number of hidden layers and the number of neurons in each layer can vary depending on the complexity of the task.
  • Output layer: The final layer produces the network's prediction or output. The number of neurons in this layer depends on the nature of the task. For example, a binary classification problem might have a single output neuron, while a multi-class classification problem might require multiple output neurons.

[Image: layers of a feedforward neural network. Source: Towards Data Science]

  • Neurons: Each neuron within a layer receives input from the previous layer, performs a computation, and passes the result to the next layer. This computation involves:
  • Weights: Each connection between neurons has an associated weight, determining its strength. These weights are adjusted during training to optimize the network's performance.
  • Bias: Each neuron has a bias term, which acts as an offset to the weighted sum of inputs.
  • Activation function: A non-linear function applied to the weighted sum of inputs and bias, introducing non-linearity into the network and enabling it to learn complex patterns. Common activation functions include sigmoid, ReLU, and tanh (see the short sketch after this list).
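To make these components concrete, here is a minimal NumPy sketch of what a single dense layer computes; the layer sizes and numbers are invented purely for illustration:

import numpy as np

# Illustrative only: a dense layer with 3 inputs and 2 neurons.
x = np.array([0.5, -1.2, 3.0])       # input features
W = np.array([[0.2, -0.5],
              [0.7,  0.1],
              [-0.3, 0.8]])          # one weight per input-to-neuron connection
b = np.array([0.1, -0.2])            # one bias per neuron

z = x @ W + b                        # weighted sum of inputs plus bias
a = np.maximum(0, z)                 # ReLU activation introduces non-linearity
print(a)

Stacking several such layers, each feeding its activations forward to the next, is exactly what a feedforward network does.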

Understanding the architecture of an FNN lays the groundwork for understanding how these networks learn and make predictions. With the structure covered, let's explore how we teach an FNN to perform tasks like image classification.

How to build a feedforward neural network

Training a Feedforward Neural Network involves adjusting its weights and biases to minimize the difference between its predictions and the actual output. This process is typically done using an optimization algorithm like gradient descent.

During training, the network makes predictions on the input data, and the error between the predictions and the actual output is calculated using a loss function. The gradients of this error with respect to the weights are then computed by backpropagation, and the weights and biases are adjusted in the direction that reduces the error.
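As a rough, framework-free sketch of that idea (the weights, gradient values, and learning rate below are purely illustrative), a single gradient-descent update nudges each weight a small step against its gradient:

import numpy as np

# Illustrative only: one gradient-descent step on a small weight vector.
w = np.array([0.5, -0.3])        # current weights
grad = np.array([0.2, -0.1])     # gradient of the loss w.r.t. the weights (from backpropagation)
learning_rate = 0.01

w = w - learning_rate * grad     # step in the direction that reduces the loss
print(w)                         # [ 0.498 -0.299]

Real optimizers such as SGD in Keras repeat this update for every weight in the network, batch after batch.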

Here is an example in which we train a model to predict whether an image shows a cat or a dog using a feedforward neural network:

The training process

  1. Importing libraries:

The first step in any project is importing the needed libraries and frameworks. In this project, we will be using the following:

  • NumPy: a library for numerical operations, especially useful for working with arrays. We will use it to handle the image data and labels as arrays for model training.
  • OS: the os module interacts with the operating system, mainly for file and directory operations, which helps us access image files within directories and navigate the file system.
  • Matplotlib: a plotting library for Python; here we use plt.imread to read images from disk into arrays.
  • Scikit-learn: a library for classical machine learning tasks. Here, it provides a convenient way to split data into training and validation sets.
  • TensorFlow and Keras: TensorFlow is a deep learning framework, and Keras is a high-level API in TensorFlow that simplifies building and training neural networks. Keras makes constructing, compiling, and training deep learning models easy.
  • scikit-image (skimage): an image-processing library that we use to resize images to the consistent size required by the neural network.

To learn more about TensorFlow, visit our CUDO Compute docs for comprehensive tutorials and guides. We also offer virtual machine templates for your TensorFlow projects. Get started now!

import numpy as np
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from skimage.transform import resize

Each of these libraries plays a part in building the deep learning project.

  2. Loading and preprocessing the data:

Next, we specify the path to the dataset and its 'train' and 'val' subdirectories. Then, we create a list (data) to store the images and their corresponding labels, which helps organize and prepare the data for training.

Neural networks require uniform input dimensions for consistent training, so all images are resized to 128x128 pixels.

# Define dataset path
dataset_path = 'cats-and-dogs'
subdirs = ['train', 'val']  # Subdirectories in the main dataset folder
categories = ['cat', 'dog']  # Categories of images

data = []
img_size = (128, 128)

Then, we loop through the dataset, load each image, resize it, and store it in the data list along with its label, preparing the raw image data by converting it into a format the model can process.

# Load and preprocess the data
for subdir in subdirs:
    for category in categories:
        path = os.path.join(dataset_path, subdir, category)
        class_num = categories.index(category)
        for img in os.listdir(path):
            try:
                img_path = os.path.join(path, img)
                img_array = plt.imread(img_path)
                img_resized = resize(img_array, img_size, anti_aliasing=True)
                data.append([img_resized, class_num])
            except Exception as e:
                print(f"Failed to load image {img} in {path}: {e}")

The code iterates over each subdirectory (e.g., 'train', 'val') and category (e.g., 'cat', 'dog'), constructs the full image path, reads the image, resizes it, and appends it to the data list with its label. The try-except block ensures that the process continues without stopping if an image fails to load.

  3. Splitting the data into features and labels:

Next, we split the data into features and labels. We do this because machine learning models learn from features (input data) and labels (the correct outputs). X typically refers to the "features" or "input data" that will be fed into the model. In this code, X contains the image data.

Specifically, X is a NumPy array where each element is an image represented as a 3D array of pixel values (height x width x color channels). In this example, each image is resized to 128x128 pixels, with 3 color channels (RGB), so each image in X is of shape (128, 128, 3).

Meanwhile, y refers to the "labels" or "target values" that correspond to the input data. In this code, y contains the labels for the images, indicating which class each image belongs to (e.g., cat or dog).

# Split data into features (X) and labels (y)
X, y = zip(*data)
X = np.array(X)
y = np.array(y)

y is a NumPy array where each element is an integer representing the class of the corresponding image in X. For example, if 0 represents "cat" and 1 represents "dog," then y[i] = 0 would indicate that X[i] is an image of a cat.

X, y = zip(*data) separates the image data from their labels.

Let’s break down what that means:

The goal of supervised learning (which this code is an example of) is to train a model to map inputs (X) to outputs (y). The model learns this mapping by finding patterns in the input data that correspond to the correct output labels.

During training, the model uses X to make predictions and compares these predictions to the actual labels in y to calculate a loss (error). This loss is then minimized by adjusting the model's parameters (weights) through backpropagation.

The data (X and y) is split into training and validation sets using train_test_split, which allocates 80% of the data for training and 20% for validation. The model uses the training set to learn and the validation set to check how well it generalizes to unseen data. X_train and y_train are used for training, while X_val and y_val are used for validation.

# Split the data into training and validation sets
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

Recall that earlier the code looped through the dataset's images, resizing them and appending them to the data list along with their corresponding labels (0 for cat, 1 for dog).

X, y = zip(*data) unpacks the data list into two separate lists: X for the images and y for the labels. X and y are then converted into NumPy arrays: X = np.array(X) and y = np.array(y), which is necessary because deep learning frameworks like TensorFlow and Keras work efficiently with NumPy arrays.

Here is how to visualize this. Suppose you have three images: an image of a cat, another cat, and a dog.

  • X = [image1, image2, image3], where image1 and image2 are cat images, and image3 is a dog image.
  • y = [0, 0, 1], where 0 represents a cat and 1 represents a dog.

The model will learn to associate the patterns in images 1 and 2 with the label 0 (cat) and the patterns in image 3 with the label 1 (dog). These variables are necessary for training a supervised learning model, where the model learns to predict y given X.

  4. Data augmentation:

Next, we use augmentation to increase the diversity of the training data, helping the model generalize better by simulating real-world variations. ImageDataGenerator applies random transformations to the images (e.g., rotations, shifts, flips).

# Data augmentation
datagen = ImageDataGenerator(
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest'
)

datagen.fit(X_train)

datagen is initialized with various augmentation parameters, and datagen.fit(X_train) computes any dataset statistics the generator needs (this is only required for options such as featurewise normalization or ZCA whitening, but it is harmless here).
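If you want to peek at what the generator produces, an optional sketch like the one below pulls a single augmented batch from the datagen and X_train defined above (the batch size of 9 is arbitrary and the variable names are just for illustration):

# Illustrative only: inspect one batch of augmented images.
batch_images, batch_labels = next(datagen.flow(X_train, y_train, batch_size=9))
print(batch_images.shape)   # e.g. (9, 128, 128, 3)
print(batch_labels[:9])     # labels for the augmented images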

  5. Building the feedforward model:

The next step is building the feedforward model. The FNN architecture is sufficient for a straightforward binary classification task like distinguishing cats from dogs. The model uses Keras's Sequential API, which allows layers to be stacked linearly.

Then, we create the input layer using the Flatten layer provided by Keras, which converts the 3D image data (128x128x3) into a 1D vector. Dense layers expect 1D input, so the image data must be flattened first.

# Build the feedforward (MLP) model
model = Sequential()
model.add(Flatten(input_shape=(128, 128, 3)))  # Flatten the input

We then add three dense hidden layers that allow the network to learn patterns and relationships in the data. As discussed in our earlier neural network article, each dense layer is a fully connected layer, meaning each neuron in the layer is connected to every neuron in the previous layer. These layers are where the model's "learning" happens.

# Add fully connected hidden layers
model.add(Dense(512, activation='relu'))
model.add(Dense(256, activation='relu'))
model.add(Dense(128, activation='relu'))

The first hidden layer has 512 neurons, and each neuron applies a ReLU (Rectified Linear Unit) activation function to its input. The second hidden layer has 256 neurons, and the third has 128 neurons, both using ReLU activation. ReLU introduces non-linearity to the model, enabling it to learn complex patterns in the data.

Finally, we create the output layer, which is responsible for producing the final predictions.

# Output layer for classification
model.add(Dense(2, activation='softmax'))

The output layer has only two neurons because this is a binary classification problem (cat vs. dog). Each neuron represents one of the possible classes.

The softmax function is applied to the output, which converts the raw output into probabilities. The sum of the probabilities across the two neurons will be 1, and the model will predict the class with the highest probability.
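For intuition, here is a tiny, hand-picked example of what softmax does to two raw output scores (the numbers are invented for illustration):

import numpy as np

# Illustrative only: softmax turns raw scores (logits) into probabilities that sum to 1.
logits = np.array([2.0, 0.5])                    # raw outputs of the two neurons
probs = np.exp(logits) / np.sum(np.exp(logits))
print(probs)          # roughly [0.82 0.18]
print(probs.sum())    # 1.0 -- the predicted class is the one with the higher probability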

  6. Compiling the model:

Compiling the model configures it for training by specifying the optimizer, loss function, and metrics. These settings determine how the model learns and how its performance is evaluated.

# Compile the model
model.compile(optimizer=SGD(), loss='sparse_categorical_crossentropy', metrics=['accuracy'])

The optimizer we used is Stochastic Gradient Descent (SGD), which updates the model's weights by computing gradients from the loss function. SGD is a simple and widely used optimization algorithm that is effective for training models with many parameters.

For the loss function, we used sparse_categorical_crossentropy, which suits classification tasks where labels are integers. It measures the error between the predicted probabilities and the true labels, guiding the model's learning.

Given the simplicity of the model, the accuracy metric was used. Accuracy is a straightforward metric for evaluating how well the model performs on the classification task.
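As a quick illustration of how the loss behaves (again with invented numbers), sparse categorical cross-entropy is simply the negative log of the probability the model assigned to the true class, so confident correct predictions are barely penalized while uncertain or wrong ones are penalized heavily:

import numpy as np

# Illustrative only: cross-entropy penalizes low probability on the true class.
true_label = 1                              # suppose the image is a dog
confident_probs = np.array([0.1, 0.9])      # the model is fairly sure it is a dog
unsure_probs = np.array([0.6, 0.4])         # the model leans towards cat

print(-np.log(confident_probs[true_label]))  # ~0.105 -> small loss
print(-np.log(unsure_probs[true_label]))     # ~0.916 -> larger loss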

  7. Training the model

model.fit() trains the model on the training data and evaluates it on the validation data after each epoch.

# Train the model
history = model.fit(datagen.flow(X_train, y_train, batch_size=32),
                    validation_data=(X_val, y_val),
                    epochs=10,
                    steps_per_epoch=len(X_train) // 32)

The fit function initiates the training loop; each epoch cycles through the following steps (a minimal hand-written sketch of one such step follows the list):

  • Forward pass: The model makes predictions on the training data.
  • Loss calculation: The difference between predicted and actual labels is computed using the loss function.
  • Backward pass (backpropagation): Gradients are computed for each weight, and the optimizer updates the weights to reduce the loss.
  • Validation: The model’s performance is evaluated on the validation set after each epoch to monitor generalization.
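For readers curious about what model.fit does under the hood, here is a rough, simplified sketch of a single training step written by hand with tf.GradientTape. It reuses the model built above; model.fit handles all of this, plus batching, shuffling, metrics, and callbacks, for you, so this is purely illustrative:

import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.SGD()

def train_step(x_batch, y_batch):
    with tf.GradientTape() as tape:
        predictions = model(x_batch, training=True)             # forward pass
        loss = loss_fn(y_batch, predictions)                    # loss calculation
    grads = tape.gradient(loss, model.trainable_variables)      # backward pass (backpropagation)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))  # weight update
    return loss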

Next, we evaluate the model.

  8. Evaluating the model

We use model.evaluate() to calculate the loss and accuracy on the validation data, which provides an unbiased assessment of the model’s performance on unseen data.


loss, accuracy = model.evaluate(X_val, y_val)
print(f"Validation Accuracy: {accuracy * 100:.2f}%")

The evaluate function runs a forward pass on the validation data and computes the loss and accuracy, clearly indicating the model’s generalization ability.

Finally, we save and load the model.

  9. Saving and loading the model

Saving the model allows you to use it later without retraining, which is useful for deployment or further experimentation. The model and its architecture, weights, and optimizer state are saved to a file that can be loaded later for predictions or continued training.

# Save the model
model.save('cats_dogs_feedforward.keras')

# Load the model to ensure it works
from tensorflow.keras.models import load_model
model = load_model('cats_dogs_feedforward.keras')

Then, we reload the saved model from a file to verify that it was saved correctly and can be reused for future predictions or further training.
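Once reloaded, you can use the model to classify a new image. The short sketch below assumes a single image preprocessed the same way as the training data (read from disk and resized to 128x128); the file name is only an example:

# Illustrative only: predict the class of one new, preprocessed image.
new_image = resize(plt.imread('my_cat_photo.jpg'), img_size, anti_aliasing=True)
probs = model.predict(np.expand_dims(new_image, axis=0))   # add a batch dimension
predicted_class = categories[np.argmax(probs[0])]          # 'cat' or 'dog'
print(predicted_class, probs[0])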

Here is the entire code:

import numpy as np
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from skimage.transform import resize

# Define dataset path
dataset_path = 'cats-and-dogs'
subdirs = ['train', 'val']  # Subdirectories in the main dataset folder
categories = ['cat', 'dog']  # Categories of images

data = []
img_size = (128, 128)

# Load and preprocess the data
for subdir in subdirs:
    for category in categories:
        path = os.path.join(dataset_path, subdir, category)
        class_num = categories.index(category)
        for img in os.listdir(path):
            try:
                img_path = os.path.join(path, img)
                img_array = plt.imread(img_path)
                img_resized = resize(img_array, img_size, anti_aliasing=True)
                data.append([img_resized, class_num])
            except Exception as e:
                print(f"Failed to load image {img} in {path}: {e}")

# Split data into features (X) and labels (y)
X, y = zip(*data)
X = np.array(X)
y = np.array(y)

# Split the data into training and validation sets
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

print(f"Loaded {len(data)} images")
print(f"Training set: {len(X_train)} images")
print(f"Validation set: {len(X_val)} images")

# Data augmentation
datagen = ImageDataGenerator(
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest'
)

datagen.fit(X_train)

# Build the feedforward (MLP) model
model = Sequential()
model.add(Flatten(input_shape=(128, 128, 3)))  # Flatten the input

# Add fully connected hidden layers
model.add(Dense(512, activation='relu'))
model.add(Dense(256, activation='relu'))
model.add(Dense(128, activation='relu'))

# Output layer for classification
model.add(Dense(2, activation='softmax'))

# Compile the model
model.compile(optimizer=SGD(), loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Train the model
history = model.fit(datagen.flow(X_train, y_train, batch_size=32),
                    validation_data=(X_val, y_val),
                    epochs=10,
                    steps_per_epoch=len(X_train) // 32)

# Evaluate the model
loss, accuracy = model.evaluate(X_val, y_val)
print(f"Validation Accuracy: {accuracy * 100:.2f}%")

# Save the model
model.save('cats_dogs_feedforward.keras')

# Load the model to ensure it works
from tensorflow.keras.models import load_model
model = load_model('cats_dogs_feedforward.keras')

Applications of feedforward neural networks

The versatility of feedforward neural networks has led to their widespread adoption in various fields. Their ability to learn complex patterns and make accurate predictions has proven invaluable across diverse applications. Let’s explore some of the prominent areas where FNNs are making a significant impact.

Image recognition and classification

FNNs excel at image recognition and classification tasks, enabling machines to “see” and interpret visual information. They can be trained to identify objects, scenes, and even human emotions within images. This capability has applications in various domains, such as:

  • Self-driving cars: FNNs can process images from cameras to identify pedestrians, traffic signs, and other vehicles, aiding in safe navigation.
  • Medical diagnosis: FNNs can analyze medical images, such as X-rays and MRIs, to detect abnormalities and assist in diagnosis.
  • Quality control: FNNs can inspect products on assembly lines, identifying defects and ensuring quality standards.


Natural language processing (NLP)

FNNs also contribute significantly to natural language processing tasks, enabling machines to understand and generate human language. They can be employed for:

  • Sentiment analysis: FNNs can determine the sentiment or emotion expressed in a piece of text, such as a customer review or a social media post.
  • Machine translation: FNNs can translate text from one language to another, facilitating communication across linguistic barriers.
  • Chatbots: FNNs can power chatbots, providing automated customer support and answering frequently asked questions.

Tabular data analysis

FNNs are widely used for analyzing tabular data, which is common in business and scientific applications. They can be utilized for:

  • Customer churn prediction: FNNs can predict which customers are likely to churn, enabling businesses to take proactive measures to retain them.
  • Credit risk assessment: FNNs can assess the creditworthiness of loan applicants, helping financial institutions make informed lending decisions.
  • Sales forecasting: FNNs can analyze historical sales data to predict future sales trends, aiding businesses in inventory management and resource allocation.

Other applications

The applications of FNNs extend beyond the fields mentioned above. They have also found use in:

  • Speech recognition: FNNs can transcribe spoken language into text, enabling voice-controlled interfaces and dictation software.
  • Recommender systems: FNNs can analyze user preferences and behavior to suggest relevant products or content, enhancing user experience.
  • Game playing: FNNs can learn to play complex games, such as chess and Go, at a superhuman level, demonstrating their ability to strategize and make decisions.

The versatility and adaptability of feedforward neural networks have made them an essential tool in artificial intelligence. Stay updated with our docs and resources, try different neural network architectures, and contact us to get access to the latest NVIDIA GPUs on demand and on reserve at CUDO Compute.
