neural-network/nnetwork.ipynb
Loïc GUEZO 65e7bd3cfc feat(nnetwork.ipynb): Step 1, 2, 3
- Initialization
- Activation Functions
- Forward Pass Function

also change `self.last_output` to `last_output` in network.py
2025-06-03 10:27:17 +02:00

{
"cells": [
{
"cell_type": "markdown",
"id": "9b3f1635",
"metadata": {},
"source": [
"# Neural Network"
]
},
{
"cell_type": "markdown",
"id": "478651c8",
"metadata": {},
"source": [
"## What is an (artificial) *Neuron*?\n",
"\n",
"> **disclaimer**: I'm no neurologist. This explanation below is only based on online research.\n",
"\n",
"An **artificial neuron** works *similarly* to a **biological neuron** in the way it processes information.\n",
"\n",
"In a brain (like yours), a **biological neuron** receives **electrical signals** from others, processes them, and sends an output signal.\n",
"\n",
"An **artificial neuron** mimics this process with simple arithmetic. It follows these steps:\n",
"1. **Takes inputs** (usually numbers between 0 and 1).\n",
"2. **Multiplies** each by a corresponding **weight** (representing the importance of that input).\n",
"3. **Adds a bias**, which shifts the result up or down.\n",
"4. **Applies an activation function**, which normalizes or squashes the output (commonly: **sigmoid**, **ReLU**, etc.).\n",
"5. **Returns the final output**, often a value between 0 and 1. \n",
"\n",
"---\n",
"\n",
"## Vocabulary / Key Components\n",
"\n",
"| Term | Meaning |\n",
"|----------|---------|\n",
"| **inputs** | List of input values (e.g., 8-bit binary numbers like `01001010`) |\n",
"| **weights** | Values associated with each input, controlling how much influence each input has |\n",
"| **bias** | A constant added to the weighted sum to adjust the output |\n",
"| **activation function** | A function like `sigmoid` that transforms the output into a bounded range |\n",
"\n",
"---\n",
"\n",
"## Minimal Neuron Implementation\n",
"\n",
"### Step 1 - Initialization"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "7d9d6072",
"metadata": {},
"outputs": [],
"source": [
"import random\n",
"\n",
"# neuron class 1\n",
"class Neuron:\n",
" \"\"\"\n",
" z : linear combination of inputs and weights plus bias (pre-activation)\n",
" y : output of the activation function (sigmoid(z))\n",
" w : list of weights, one for each input\n",
" \"\"\"\n",
" def __init__(self, isize):\n",
" # number of inputs to this neuron\n",
" self.isize = isize\n",
"        # weight: importance given to each input\n",
"        self.weight = [random.uniform(-1, 1) for _ in range(self.isize)]\n",
"        # bias: constant offset added to the weighted sum\n",
"        self.bias = random.uniform(-1, 1)"
]
},
{
"cell_type": "markdown",
"id": "6dd28c51",
"metadata": {},
"source": [
"On its own, this class can't do much yet, but it is a good starting point to illustrate how a neuron is set up: \n",
"it takes an input size as a parameter, generates a corresponding list of random weights, and assigns a random bias."
]
},
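{
"cell_type": "markdown",
"id": "a3f90b12",
"metadata": {},
"source": [
"To make the five steps above concrete, here they are traced by hand with fixed, made-up weights and a made-up bias instead of random ones (the numbers are purely illustrative):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b7c2d4e9",
"metadata": {},
"outputs": [],
"source": [
"import math\n",
"\n",
"# made-up values, for illustration only\n",
"inputs = [1, 0, 1]\n",
"weights = [0.5, -0.8, 0.2]\n",
"bias = 0.1\n",
"\n",
"# steps 2-3: weighted sum plus bias\n",
"z = sum(w * x for w, x in zip(weights, inputs)) + bias  # 0.5 + 0.0 + 0.2 + 0.1 = 0.8\n",
"\n",
"# step 4: squash with sigmoid\n",
"output = 1 / (1 + math.exp(-z))\n",
"\n",
"print(\"z :\", z)\n",
"print(\"output :\", output)  # roughly 0.69"
]
},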
{
"cell_type": "markdown",
"id": "0c47647c",
"metadata": {},
"source": [
"### Step 2 - Activation Functions"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "ee0fdb39",
"metadata": {},
"outputs": [],
"source": [
"import math\n",
"\n",
"# squash any real number into the range (0, 1)\n",
"def sigmoid(x):\n",
" return 1 / (1 + math.exp(-x))\n",
"\n",
"# sigmoid's derivative, expressed in terms of sigmoid(x)\n",
"def sigmoid_deriv(x): \n",
" y = sigmoid(x)\n",
" return y * (1 - y)"
]
},
{
"cell_type": "markdown",
"id": "79e011c2",
"metadata": {},
"source": [
"These functions are called activation functions. Their goal is to map the raw pre-activation value (which can be any real number) into a more manageable range; sigmoid, for example, outputs values between 0 and 1. The most well-known ones are:\n",
"- sigmoid \n",
"- ReLU (Rectified Linear Unit)\n",
"- Tanh\n",
"\n",
"### Sigmoid Graphical Representation\n",
"![sigmoid](./res/sigmoid.png)"
]
},
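{
"cell_type": "markdown",
"id": "c1e5a7d3",
"metadata": {},
"source": [
"Sigmoid is the only activation this notebook uses, but the other two are just as short. As a quick sketch (these helpers are illustrative and not used by the `Neuron` class):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d8f3b6a1",
"metadata": {},
"outputs": [],
"source": [
"import math\n",
"\n",
"# ReLU: keeps positive values as-is, clips negatives to 0\n",
"def relu(x):\n",
"    return max(0.0, x)\n",
"\n",
"# Tanh: squashes into (-1, 1) and is zero-centered, unlike sigmoid\n",
"def tanh(x):\n",
"    return math.tanh(x)\n",
"\n",
"print(relu(-2.0), relu(3.0))  # 0.0 3.0\n",
"print(tanh(-2.0), tanh(2.0))"
]
},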
{
"cell_type": "markdown",
"id": "1aff9ee6",
"metadata": {},
"source": [
"### Step 3 - Forward Pass Function"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "7ca39a42",
"metadata": {},
"outputs": [],
"source": [
"# neuron class 2\n",
"class Neuron:\n",
" \"\"\"\n",
" z : linear combination of inputs and weights plus bias (pre-activation)\n",
" y : output of the activation function (sigmoid(z))\n",
" w : list of weights, one for each input\n",
" \"\"\"\n",
" def __init__(self, isize):\n",
" # number of inputs to this neuron\n",
" self.isize = isize\n",
"        # weight: importance given to each input\n",
"        self.weight = [random.uniform(-1, 1) for _ in range(self.isize)]\n",
"        # bias: constant offset added to the weighted sum\n",
"        self.bias = random.uniform(-1, 1)\n",
"\n",
" def forward(self, x, activate=True):\n",
" \"\"\"\n",
" x : list of input values to the neuron\n",
" \"\"\"\n",
"        # compute the weighted sum of the inputs and add the bias\n",
" self.z = sum(w * xi for w, xi in zip(self.weight, x)) + self.bias\n",
" # normalize the output between 0 and 1 if activate\n",
" output = sigmoid(self.z) if activate else self.z\n",
"\n",
" return output"
]
},
{
"cell_type": "markdown",
"id": "3e7b79fa",
"metadata": {},
"source": [
"The `forward()` method simulates how a neuron processes its inputs:\n",
"1. **Weighted Sum and Bias** (`z` variable): \n",
"   \n",
"   Each input is multiplied by its corresponding weight, then everything is summed and the bias is added.\n",
"   ```z = w1 * x1 + w2 * x2 + ... + bias```\n",
"\n",
"2. **Activation**: \n",
"\n",
"   The pre-activation value `z` is then passed through an **activation function** (like sigmoid), which squashes the output between 0 and 1.\n",
"   The activation can be disabled with `activate=False`, which is useful for **output neurons** in some tasks.\n",
"\n",
"3. **Returns the output**:\n",
"\n",
"   The activated (or raw) value becomes the neuron's final output.\n",
"\n",
"#### Test - Forward Pass"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "6709c5c7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Neuron output : 0.7539649973230405\n"
]
}
],
"source": [
"# 8 for 8 bits (1 Byte)\n",
"nbits: int = 8\n",
"neuron = Neuron(nbits)\n",
"inputs: list = [1, 0, 1, 0, 0, 1, 1, 0] \n",
"\n",
"output = neuron.forward(inputs)\n",
"print(\"Neuron output :\", output)"
]
},
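{
"cell_type": "markdown",
"id": "e2a9c5f7",
"metadata": {},
"source": [
"The `activate=False` path can be checked directly: the raw pre-activation `z` is unbounded, while the activated output always lands strictly between 0 and 1. A small check, reusing the `Neuron` class and `sigmoid` function defined above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f4b8d2c6",
"metadata": {},
"outputs": [],
"source": [
"n = Neuron(nbits)\n",
"raw = n.forward(inputs, activate=False)  # unbounded pre-activation z\n",
"activated = n.forward(inputs)            # squashed into (0, 1)\n",
"\n",
"print(\"raw z :\", raw)\n",
"print(\"activated :\", activated)\n",
"assert 0 < activated < 1\n",
"assert abs(sigmoid(raw) - activated) < 1e-12"
]
},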
{
"cell_type": "markdown",
"id": "5593a84a",
"metadata": {},
"source": [
"The output varies from run to run because the weights and bias are randomly initialized in each Neuron; none of the neurons has been trained for this input yet."
]
},
{
"cell_type": "markdown",
"id": "aa57ae8e",
"metadata": {},
"source": [
"### Step 4 - Training"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f6de25ea",
"metadata": {},
"outputs": [],
"source": [
"import math\n",
"import random\n",
"\n",
"# Neuron 3\n",
"class Neuron:\n",
" def __init__(self, isize: int) -> None:\n",
" self.isize = isize\n",
" self.weight = [random.uniform(0, 1) for _ in range(self.isize)]\n",
" self.bias = random.uniform(0, 1)\n",
"\n",
" def forward(self, inputs: list) -> float:\n",
" assert len(inputs) == self.isize, \"error: incorrect inputs number\"\n",
" total = sum(self.weight[i] * inputs[i] for i in range(self.isize)) + self.bias\n",
" return self.sigmoid(total)\n",
" \n",
"    def sigmoid(self, x: float) -> float:\n",
"        return 1 / (1 + math.exp(-x))\n",
"\n",
"    # target needs to be between 0 and 1\n",
"    def train(self, inputs: list, target: float, learning_rate: float = 0.1):\n",
"        # forward pass: weighted sum plus bias, then sigmoid\n",
"        z = sum(self.weight[i] * inputs[i] for i in range(self.isize)) + self.bias\n",
"        output = self.sigmoid(z)\n",
"\n",
"        # gradient of the squared error through the sigmoid\n",
"        error = output - target\n",
"        d_sigmoid = output * (1 - output)\n",
"        dz = error * d_sigmoid\n",
"\n",
"        # gradient descent: nudge each weight and the bias against the gradient\n",
"        for i in range(self.isize):\n",
"            self.weight[i] -= learning_rate * dz * inputs[i]\n",
"\n",
"        self.bias -= learning_rate * dz\n"
]
}
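,
{
"cell_type": "markdown",
"id": "a9d4e7b2",
"metadata": {},
"source": [
"#### Test - Training\n",
"\n",
"Each call to `train()` performs one gradient-descent step on the squared error. An illustrative loop (the exact numbers depend on the random initialization) shows the output moving toward the target:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b5c8f1d4",
"metadata": {},
"outputs": [],
"source": [
"neuron = Neuron(8)\n",
"bits = [1, 0, 1, 0, 0, 1, 1, 0]\n",
"target = 1.0\n",
"\n",
"before = neuron.forward(bits)\n",
"for _ in range(1000):\n",
"    neuron.train(bits, target)\n",
"after = neuron.forward(bits)\n",
"\n",
"print(f\"before: {before:.4f}  after: {after:.4f}\")\n",
"# repeated updates should pull the output closer to the target\n",
"assert abs(after - target) < abs(before - target)"
]
}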
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}