Deep Learning (BAI701) – Unit One
Unit Overview
This unit introduces the basic ideas of Machine Learning and Neural Networks in a simple way. It explains how machines learn from data instead of following fixed rules written by humans. You will start from simple linear models and gradually move to neural networks.
For AKTU exams, this unit is very important because:
- Many questions are theory-based (definitions, explanations)
- Short notes and comparisons are frequently asked
- Concepts like Perceptron, Loss Function, Backpropagation appear often
If you understand this unit well, you can easily score in 5-mark and 10-mark questions.
Introduction to Machine Learning
What is Machine Learning
Machine Learning means teaching a computer to learn from data. Instead of giving exact rules, we give examples, and the machine finds patterns. The machine improves its performance with experience.
Real-life example: Email spam filter learns from past emails to decide spam or not spam.
Why Not Normal Programming
In normal programming, rules are written by humans. But some problems have too many rules, or the rules keep changing. Machine Learning handles such problems more easily.
Example: Face recognition cannot be solved by fixed rules, so ML is used.
Simple Real-Life Example
A student predicts exam marks using past scores. Machine Learning does the same, but with large data and math.

Linear Models
Linear models make decisions using a straight line or plane. They are simple, fast, and easy to understand.
Perceptron
What is Perceptron
The Perceptron is the simplest learning model. It works like a basic decision-maker.
How It Works
- Takes inputs
- Multiplies each input by a weight
- Adds the weighted inputs together (plus a bias)
- Outputs 1 if the sum crosses a threshold, otherwise 0
Think of it like: A student deciding pass or fail based on marks.
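The steps above can be sketched in plain Python. The weights and bias here are illustrative values, not learned ones:

```python
# Minimal perceptron sketch: weighted sum + threshold (step) activation.
# Weights and bias are illustrative, not learned values.

def perceptron(inputs, weights, bias):
    # Weighted sum of inputs, shifted by the bias
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Step activation: output 1 if the sum is non-negative, else 0
    return 1 if total >= 0 else 0

# Example: a "pass/fail" decision based on two subject marks
marks = [60, 70]
weights = [0.5, 0.5]   # equal importance to both marks
bias = -50             # acts as a threshold

print(perceptron(marks, weights, bias))  # 1 (pass)
```

A real perceptron would learn the weights from labelled examples; here they are fixed just to show the computation.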
Real-Life Example
Loan approval: approve (1) or reject (0).
Limitation
It cannot solve non-linearly separable problems like XOR. It only works for linearly separable data.

Logistic Regression
What It Is
Logistic Regression is a classification model. It gives its output as a probability between 0 and 1.
Why Probability is Useful
Probability shows the confidence of a prediction. It is more informative than a plain yes or no.
Real-Life Example
Chance of student passing exam: 0.8 means 80% chance.
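This idea can be sketched with the sigmoid function, which squashes a linear score into a probability. The features (hours studied, attendance) and the weights are made-up illustrative values:

```python
import math

# Logistic regression sketch: a linear score squashed by the sigmoid
# into a probability between 0 and 1. Weights and bias are illustrative.

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def predict_proba(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# Chance of a student passing, from hours studied and attendance (%)
p = predict_proba([8, 90], [0.3, 0.02], -2.5)
print(round(p, 2))  # about 0.85, i.e. an 85% chance of passing
```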
Limitation
It still creates a straight decision boundary, so it is not good for very complex patterns.
Support Vector Machine (SVM)

What It Is
SVM is a powerful classification model. It separates data using the best possible line (or hyperplane).
Meaning of Maximum Margin
Margin means the distance between the separating line and the nearest data points. SVM chooses the line with the maximum safety gap.
Real-Life Example
Separating two groups of students clearly on a marks chart.
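The margin of a given line can be computed directly. This sketch only measures the margin of one candidate line on made-up marks data; a real SVM would search for the line that maximizes this value:

```python
import math

# Margin sketch: the distance from a separating line w.x + b = 0
# to its nearest data point. A real SVM finds the w, b that
# maximize this margin; here the line is chosen by hand.

def margin(points, w, b):
    norm = math.sqrt(sum(wi * wi for wi in w))
    return min(abs(sum(wi * xi for wi, xi in zip(w, x)) + b)
               for x in points) / norm

# Two groups of students on a marks chart, and the line x + y = 100
points = [(30, 40), (35, 30), (70, 80), (60, 75)]
print(margin(points, w=[1, 1], b=-100))  # distance to the closest student
```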
Limitation
Training can be slow for large datasets, and choosing the right settings (kernel, parameters) is difficult.
Comparison of Linear Models
| Feature | Perceptron | Logistic Regression | SVM |
| --- | --- | --- | --- |
| Output Type | 0 or 1 | Probability (0–1) | Class label |
| Decision Boundary | Straight line | Straight line | Optimal margin line |
| Learning Power | Very basic | Better than Perceptron | Very strong |
| Limitation | Cannot handle complex data | Still linear | Slow for big data |
Explanation: The Perceptron is the simplest. Logistic Regression adds probability. SVM gives the most accurate separation.
Introduction to Neural Networks
Why Linear Models Fail
Linear models cannot learn curves or complex shapes, but real-world data is usually non-linear.
What is a Neuron
A neuron is a small computing unit.
- Inputs: data values
- Weights: importance of inputs
- Bias: adjustment value
- Activation: decision function
Think of a neuron as: a smart switch that turns ON or OFF.
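The four parts listed above (inputs, weights, bias, activation) combine like this. The numbers are illustrative:

```python
import math

# One neuron: weighted inputs + bias, passed through an activation.
# All values here are illustrative.

def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias  # linear part
    return 1 / (1 + math.exp(-z))                           # sigmoid activation

# Sigmoid output near 1 means "ON", near 0 means "OFF"
print(neuron([1.0, 2.0], [0.4, -0.1], 0.3))
```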
What a Shallow Neural Network Computes
A shallow network has one hidden layer. It combines neurons to learn complex patterns. Even one hidden layer is very powerful.
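A forward pass through such a network can be sketched as a hidden layer of sigmoid neurons feeding one output neuron. All weights and biases are illustrative:

```python
import math

# Shallow network sketch: one hidden layer of sigmoid neurons feeding
# a single output neuron. Weights and biases are illustrative.

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def shallow_net(x, hidden_w, hidden_b, out_w, out_b):
    # Hidden layer: one sigmoid neuron per (weights, bias) pair
    h = [sigmoid(sum(xi * wi for xi, wi in zip(x, w)) + b)
         for w, b in zip(hidden_w, hidden_b)]
    # Output neuron combines the hidden activations
    return sigmoid(sum(hi * wi for hi, wi in zip(h, out_w)) + out_b)

y = shallow_net([1.0, 0.5],
                hidden_w=[[2.0, -1.0], [-1.5, 1.0]], hidden_b=[0.1, 0.2],
                out_w=[1.0, -1.0], out_b=0.0)
print(y)  # a value between 0 and 1
```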

Training a Neural Network
Training means teaching the network to improve answers step by step.
Loss Function
What Loss Means
Loss shows how wrong the prediction is. Higher loss means a worse prediction.
Why Error is Needed
Without a measure of error, the model does not know what to improve. Loss acts like a report card.
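One common choice is mean squared error, sketched here on made-up predictions:

```python
# Loss sketch: mean squared error between predictions and true values.

def mse(predictions, targets):
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

print(mse([0.9, 0.2, 0.8], [1, 0, 1]))  # small loss: predictions are close
print(mse([0.1, 0.9, 0.2], [1, 0, 1]))  # large loss: predictions are wrong
```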
Backpropagation
What It Means
Backpropagation means sending the error backward through the network. It updates the weights to reduce the loss.
Why Backward
The output error depends on earlier layers, so the correction must move from output to input.
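For a single linear neuron the whole idea reduces to the chain rule. This toy sketch (illustrative numbers) shows the error flowing backward into a weight update:

```python
# Backpropagation sketch for a single linear neuron y = w * x,
# with squared loss L = (y - t)^2. The chain rule sends the error
# backward: dL/dw = dL/dy * dy/dw = 2 * (y - t) * x.

w, x, t = 0.5, 2.0, 3.0       # weight, input, target (illustrative)
lr = 0.1                      # learning rate

for step in range(20):
    y = w * x                 # forward pass
    grad = 2 * (y - t) * x    # backward pass: gradient of loss w.r.t. w
    w -= lr * grad            # weight update reduces the loss

print(round(w, 3))            # approaches 1.5, since 1.5 * 2.0 = 3.0
```

In a deep network the same chain rule is applied layer by layer, from the output back to the input.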
Stochastic Gradient Descent (SGD)
What SGD Means
SGD improves the model step by step. It uses small portions of data at a time instead of the whole dataset.
Why Small Steps
Big steps may overshoot the best solution. Small steps move safely toward the minimum error.
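The two ideas together (one sample at a time, small learning rate) can be sketched on a tiny made-up dataset where the true relationship is y = 2x:

```python
import random

# SGD sketch: fit y = w * x on a small dataset, updating w from
# one randomly chosen example at a time instead of the whole dataset.

random.seed(0)
data = [(x, 2 * x) for x in range(1, 6)]   # true relationship: y = 2x
w, lr = 0.0, 0.01                          # start from w = 0, small steps

for step in range(500):
    x, t = random.choice(data)             # small portion: one sample
    y = w * x                              # prediction
    w -= lr * 2 * (y - t) * x              # step down the loss gradient

print(round(w, 2))  # close to 2.0
```

A larger learning rate would take bigger steps and could oscillate past the minimum instead of settling at it.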
Neural Networks as Universal Function Approximators
Meaning
Neural Networks can learn almost any type of pattern. They can fit both simple and complex curves.
Why One Hidden Layer is Powerful
Even one hidden layer, with enough neurons, can approximate any continuous function. More layers mainly make learning more efficient and practical.
Why This Matters
It explains why neural networks are used in:
- Image recognition
- Speech systems
- Self-driving cars
This topic is very important for exams.
AKTU Exam Answer Boost
5-Mark Answer Points
- Definition of Machine Learning with example
- Explanation of Perceptron working
- Loss Function meaning
- Backpropagation idea
- SGD concept
Important Keywords: Machine Learning, Linear Model, Loss, Weight Update, Backpropagation
10-Mark Answer Points
- Explain Machine Learning vs traditional programming
- Describe Perceptron, Logistic Regression, and SVM
- Draw neural network diagram
- Explain training steps
- Universal approximation theorem
One-Shot Revision Summary
| Concept | Explanation |
| --- | --- |
| Machine Learning | Computers learn from data instead of rules. |
| Perceptron | Basic binary decision model. |
| Logistic Regression | Gives probability output. |
| SVM | Finds best separating line with maximum margin. |
| Neural Network | Group of neurons working together. |
| Loss Function | Measures prediction error. |
| Backpropagation | Error correction method. |
| SGD | Learning using small steps. |
| Universal Approximation | Neural networks can learn any function. |
After studying this unit, the student should feel confident to attempt any AKTU question from this chapter.