Neural Networks 101:
Simulating the Synapse
A Neural Network is a mathematical model inspired by the biological structure of the human brain. While the brain uses chemical signals and electrical pulses, an artificial neural network (ANN) uses numbers, linear algebra, and calculus to find patterns in data.
1. The Atom of AI: The Perceptron
The most basic unit of a neural network is the Perceptron. Think of it as a decision-making node. It takes several inputs, applies a weight to each, and produces a single output.
The Mathematical Process
Every input is multiplied by a Weight. These weighted inputs are summed together, and a Bias is added: output = activation(w1·x1 + w2·x2 + … + wn·xn + b). The result is then passed through an activation function to determine whether the neuron "fires."
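This process can be sketched in a few lines of Python (the function name and the example weights here are illustrative, not a standard API):

```python
# A minimal perceptron sketch: weighted sum + bias, then a step activation.
def perceptron(inputs, weights, bias):
    # Multiply each input by its weight and sum the results.
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    # Add the bias; the neuron "fires" (outputs 1) if the total is positive.
    return 1 if weighted_sum + bias > 0 else 0

# Example: with these hand-picked weights the perceptron acts as an AND gate.
print(perceptron([1, 1], [0.5, 0.5], -0.7))  # 0.5 + 0.5 - 0.7 = 0.3 > 0 → 1
print(perceptron([1, 0], [0.5, 0.5], -0.7))  # 0.5 - 0.7 = -0.2 ≤ 0 → 0
```

In a real network the weights are not hand-picked; they are learned from data during training.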
2. Deep Layers & Information Flow
A single perceptron can only solve simple linear problems. To solve complex tasks (like identifying a face), we stack these neurons into Layers.
- Input Layer: Where the data (vectors) first enters the system.
- Hidden Layers: The "black box" where the magic happens. Each layer extracts higher-level features (e.g., Layer 1 finds edges, Layer 2 finds shapes, Layer 3 finds faces).
- Output Layer: The final decision (e.g., "This image is a cat").
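The flow through these layers can be sketched as a chain of weighted sums, each followed by an activation (here ReLU, one of the functions covered in the next section). All weights and sizes below are illustrative:

```python
def layer_forward(inputs, weights, biases):
    # Each neuron computes a weighted sum of ALL inputs plus its own bias,
    # then applies the ReLU activation (negative values become 0).
    return [max(0.0, sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

# A tiny 2-3-1 network: 2 input values → 3 hidden neurons → 1 output neuron.
x = [0.5, -0.2]  # input layer: the raw data vector
hidden = layer_forward(x,
                       [[0.2, 0.4], [0.5, -0.1], [0.3, 0.3]],  # 3 rows of 2 weights
                       [0.1, 0.0, 0.0])
output = layer_forward(hidden,
                       [[0.5, -0.7, 0.3]],  # 1 row of 3 weights
                       [0.2])
print(output)  # a single number: the network's final decision value
```

Each call to layer_forward is one layer of the stack; feeding the output of one layer into the next is exactly the "information flow" described above.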
3. Activation Functions: Adding Non-Linearity
If we only used linear math, the network would collapse into one giant linear regression no matter how many layers it had, because a composition of linear functions is itself linear. To solve complex, non-linear problems, we insert Activation Functions between the layers.
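Two of the most common activation functions, sketched in plain Python (many other variants exist, such as tanh and leaky ReLU):

```python
import math

def relu(x):
    # ReLU: passes positive values through unchanged, zeroes out negatives.
    return max(0.0, x)

def sigmoid(x):
    # Sigmoid: squashes any real number into the range (0, 1),
    # which makes it useful for outputs interpreted as probabilities.
    return 1.0 / (1.0 + math.exp(-x))

print(relu(-2.0), relu(3.0))  # 0.0 3.0
print(sigmoid(0.0))           # 0.5
```

Because both functions bend straight lines, stacking layers that use them lets the network carve out curved, non-linear decision boundaries.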