{ "cells": [ { "cell_type": "markdown", "id": "9b3f1635", "metadata": {}, "source": [ "# Neural Network" ] }, { "cell_type": "markdown", "id": "478651c8", "metadata": {}, "source": [ "## What is a *Neuron* (artifical)\n", "\n", "First of all, **I'm not an Neurologist so i might say some nonsense, i only researched online**. \n", "\n", "An artifical *neuron* works similary to a biological *neuron* in the way it process information. In a brain, like yours, a *neuron* receives signals from other *neurons*, processes them and sends an *output*.\n", "\n", "An artifical *neuron* takes **multiple *inputs*** (such as numbers), applies updated values called **weights** to each *inputs*, adds a constant called **bias**, apply a specific function to normalize the value called **Activation function**, and then `returns` the *output* of the Activation function (such as: **sigmoid**, **ReLU**, etc...)." ] }, { "cell_type": "code", "execution_count": null, "id": "7d9d6072", "metadata": {}, "outputs": [], "source": [ "import random\n", "\n", "# Neuron 1\n", "class Neuron:\n", " def __init__(self, input_size: int) -> None:\n", " self.input_size = input_size\n", " self.weight = [random.uniform(0, 1) for _ in range(self.input_size)]\n", " self.bias = random.uniform(0, 1)" ] }, { "cell_type": "markdown", "id": "1aff9ee6", "metadata": {}, "source": [ "# 2" ] }, { "cell_type": "code", "execution_count": null, "id": "7ca39a42", "metadata": {}, "outputs": [], "source": [ "import math\n", "import random\n", "\n", "# Neuron 2\n", "class Neuron:\n", " def __init__(self, input_size: int) -> None:\n", " self.input_size = input_size\n", " self.weight = [random.uniform(0, 1) for _ in range(self.input_size)]\n", " self.bias = random.uniform(0, 1)\n", "\n", " def sigmoid(x: float) -> float:\n", " return 1/(1 + math.exp(-x))\n", " \n", " def forward(self, inputs: list) -> float:\n", " assert len(inputs) == self.input_size, \"error: misnumber inputs number\"\n", " total = sum(self.weight[i] * inputs[i] for i in range(self.isize)) + self.bias\n", " return self.sigmoid(total)" ] }, { "cell_type": "code", "execution_count": null, "id": "6709c5c7", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Neuron output : 0.9001175686881125\n" ] } ], "source": [ "# 8 for 8 bits (1 Byte)\n", "nbits: int = 8\n", "neuron = Neuron(nbits)\n", "inputs: list = [1, 0, 1, 0, 0, 1, 1, 0] \n", "\n", "output = neuron.forward(inputs)\n", "print(\"Neuron output :\", output)" ] }, { "cell_type": "markdown", "id": "aa57ae8e", "metadata": {}, "source": [ "# 3" ] }, { "cell_type": "code", "execution_count": null, "id": "f6de25ea", "metadata": {}, "outputs": [], "source": [ "import math\n", "import random\n", "\n", "class Neuron:\n", " def __init__(self, isize: int) -> None:\n", " self.isize = isize\n", " self.weight = [random.uniform(0, 1) for _ in range(self.isize)]\n", " self.bias = random.uniform(0, 1)\n", "\n", " def forward(self, inputs: list) -> float:\n", " assert len(inputs) == self.isize, \"error: incorrect inputs number\"\n", " total = sum(self.weight[i] * inputs[i] for i in range(self.isize)) + self.bias\n", " return self.sigmoid(total)\n", " \n", " def sigmoid(x: float) -> float:\n", " return 1/(1 + math.exp(-x))\n", "\n", " # target needs to be between 0 and 1\n", " def train(self, inputs: list, target: float, learning_rate: float = 0.1):\n", " z = sum(self.weight[i] * inputs[i] for i in range(self.isize)) + self.bias\n", " output = self.sigmoid(z)\n", "\n", " error = output - target\n", " d_sigmoid = output * (1 - output)\n", " dz = error * d_sigmoid\n", "\n", " for i in range(self.isize):\n", " self.weight[i] -= learning_rate * dz * inputs[i]\n", "\n", " self.bias -= learning_rate * dz\n" ] } ], "metadata": { "kernelspec": { "display_name": ".venv", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.13.3" } }, "nbformat": 4, "nbformat_minor": 5 }