
Deep Learning (BAI701) Unit One Cover

Dec 29, 2025, 9:09 PM - Admin

Unit Overview

This unit introduces the basic ideas of Machine Learning and Neural Networks in a simple way. It explains how machines learn from data instead of following fixed rules written by humans. You will start from simple linear models and gradually move to neural networks.

For AKTU exams, this unit is very important because:

  1. Many questions are theory-based (definitions, explanations)
  2. Short notes and comparisons are frequently asked
  3. Concepts like Perceptron, Loss Function, Backpropagation appear often

If you understand this unit well, you can easily score in 5-mark and 10-mark questions.

Introduction to Machine Learning

What is Machine Learning

Machine Learning means teaching a computer to learn from data. Instead of giving exact rules, we give examples, and the machine finds patterns. The machine improves its performance with experience.

Real-life example: Email spam filter learns from past emails to decide spam or not spam.

Why Not Normal Programming

In normal programming, rules are written by humans. But some problems have too many rules, or the rules keep changing. Machine Learning handles such problems easily.

Example: Face recognition cannot be solved by fixed rules, so ML is used.

Simple Real-Life Example

A student predicts exam marks using past scores. Machine Learning does the same, but with large amounts of data and mathematics.
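The marks-prediction idea can be sketched as a tiny "learning from examples" program: fit a straight line to past (hours studied, marks) pairs and predict for a new student. The numbers below are made up for illustration.

```python
def fit_line(xs, ys):
    """Least-squares fit of y = m*x + c to the given points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    c = mean_y - m * mean_x
    return m, c

hours = [2, 4, 6, 8]
marks = [40, 50, 60, 70]      # made-up data: marks = 5*hours + 30

m, c = fit_line(hours, marks)
print(m, c)                    # 5.0 30.0
print(m * 5 + c)               # predicted marks for 5 hours: 55.0
```

Notice that no rule like "5 marks per hour" was written by hand; the program recovered it from the examples.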

Traditional Programming vs Machine Learning

Linear Models

Linear models make decisions using a straight line or plane. They are simple, fast, and easy to understand.

Perceptron

What is Perceptron

The Perceptron is the simplest learning model. It works like a basic decision-maker.

How It Works

  1. Takes inputs
  2. Multiplies by weights
  3. Adds them
  4. Gives output as 0 or 1

Think of it like: A student deciding pass or fail based on marks.
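The four steps above can be sketched in a few lines. The marks, weights, and threshold here are illustrative, not from any real grading scheme.

```python
def perceptron(inputs, weights, bias):
    """Steps 1-4: multiply inputs by weights, add them (plus bias),
    then threshold the total to output 0 or 1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total >= 0 else 0

# Pass/fail decision based on marks in two subjects.
marks = [65, 70]
weights = [0.5, 0.5]     # equal importance for both subjects
bias = -60               # acts as a threshold: average must reach 60

print(perceptron(marks, weights, bias))   # 1 (pass)
```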

Real-Life Example

Loan approval: approve (1) or reject (0).

Limitation

It cannot solve complex problems like XOR. It only works for linearly separable data.

Perceptron

Logistic Regression

What It Is

Logistic Regression is a classification model. It gives its output as a probability between 0 and 1.

Why Probability is Useful

Probability shows the confidence of a prediction. It is more informative than a plain yes or no.

Real-Life Example

Chance of student passing exam: 0.8 means 80% chance.
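The probability comes from the sigmoid function, which squashes any real-valued score into the range (0, 1). The score value below is hand-picked so the example matches the 0.8 above.

```python
import math

def sigmoid(z):
    """Squash any real number z into a probability between 0 and 1."""
    return 1 / (1 + math.exp(-z))

# z is the linear score (weights . inputs + bias); chosen here so
# that the output is exactly 0.8, i.e. an 80% chance of passing.
z = math.log(4)
p = sigmoid(z)
print(round(p, 2))   # 0.8
```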

Limitation

It still creates a straight decision boundary, so it is not good for very complex patterns.

Support Vector Machine (SVM)

Two groups of points separated by a wide-margin line.

What It Is

SVM is a powerful classification model. It separates data using the best possible line.

Meaning of Maximum Margin

Margin means the distance between the line and the nearest data points. SVM chooses the line with the maximum safety gap.
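The margin idea rests on a simple geometric fact: the perpendicular distance from a point to the line w·x + b = 0 is |w·x + b| / ||w||. A minimal sketch, using a hypothetical separating line x + y - 4 = 0:

```python
import math

def distance_to_line(w, b, point):
    """Perpendicular distance from a point to the line w.x + b = 0."""
    score = sum(wi * xi for wi, xi in zip(w, point)) + b
    norm = math.sqrt(sum(wi * wi for wi in w))
    return abs(score) / norm

w, b = [1, 1], -4                         # hypothetical line x + y = 4
print(distance_to_line(w, b, [2, 2]))     # 0.0: exactly on the line
print(distance_to_line(w, b, [1, 1]))     # sqrt(2): a safe margin away
```

SVM training searches over all candidate lines for the one that makes the smallest of these distances as large as possible.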

Real-Life Example

Separating two groups of students clearly on a marks chart.

Limitation

Training can be slow for large datasets, and choosing the right settings is difficult.

Comparison of Linear Models

| Feature | Perceptron | Logistic Regression | SVM |
| --- | --- | --- | --- |
| Output Type | 0 or 1 | Probability (0–1) | Class label |
| Decision Boundary | Straight line | Straight line | Optimal margin line |
| Learning Power | Very basic | Better than perceptron | Very strong |
| Limitation | Cannot handle complex data | Still linear | Slow for big data |

Explanation: The Perceptron is the simplest. Logistic Regression adds probability. SVM gives the most accurate separation.

Introduction to Neural Networks

Why Linear Models Fail

Linear models cannot learn curves or complex shapes. Real-world data is usually non-linear.

What is a Neuron

A neuron is a small computing unit.

  1. Inputs: data values
  2. Weights: importance of inputs
  3. Bias: adjustment value
  4. Activation: decision function

Think of a neuron as a smart switch that turns ON or OFF.
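The four parts listed above combine into one small computation: a weighted sum plus bias, passed through an activation function. The inputs and weights below are hand-picked for illustration.

```python
import math

def neuron(inputs, weights, bias):
    """One neuron: weighted sum of inputs plus bias, then a sigmoid
    activation that squashes the result into (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# z = 1.0*0.4 + 2.0*0.3 - 1.0 = 0.0, so the output is exactly 0.5:
# the "switch" is right at its ON/OFF boundary.
out = neuron([1.0, 2.0], [0.4, 0.3], bias=-1.0)
print(out)   # 0.5
```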

What a Shallow Neural Network Computes

A shallow network has one hidden layer. It combines neurons to learn complex patterns. Even one hidden layer is very powerful.

Neural network with one hidden layer
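A good illustration of this power is XOR, which a single perceptron cannot solve but one hidden layer can. In this sketch the weights are set by hand rather than learned, just to show that such a network exists.

```python
def step(z):
    """Threshold activation: 1 if z >= 0, else 0."""
    return 1 if z >= 0 else 0

def xor_net(x1, x2):
    """One hidden layer of two neurons computes XOR."""
    h1 = step(x1 + x2 - 0.5)        # OR-like hidden neuron
    h2 = step(-x1 - x2 + 1.5)       # NAND-like hidden neuron
    return step(h1 + h2 - 1.5)      # output neuron: AND of h1, h2

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_net(a, b))      # outputs 0, 1, 1, 0
```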

Training a Neural Network

Training means teaching the network to improve answers step by step.

Loss Function

What Loss Means

Loss shows how wrong the prediction is. Higher loss means a worse prediction.

Why Error is Needed

Without an error signal, the model does not know what to improve. Loss acts like a report card.
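One common choice of loss, used here as an example, is mean squared error: the average of the squared differences between predictions and targets.

```python
def mse_loss(predictions, targets):
    """Mean squared error: average of squared differences."""
    n = len(predictions)
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / n

good = mse_loss([0.9, 0.1], [1.0, 0.0])   # predictions close to targets
bad  = mse_loss([0.2, 0.8], [1.0, 0.0])   # predictions far from targets
print(good, bad)   # roughly 0.01 vs 0.64: higher loss = worse prediction
```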

Backpropagation

What It Means

Backpropagation means sending the error backward through the network. It updates the weights to reduce the loss.

Why Backward

The output error depends on the earlier layers, so the correction must move from output to input.
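The backward flow is just the chain rule of calculus. A minimal worked example on a single weight, with made-up numbers: y = w*x with squared loss (y - t)^2.

```python
x, t = 2.0, 10.0    # input and target (illustrative values)
w = 3.0             # current weight

# Forward pass
y = w * x                       # prediction: y = 6
loss = (y - t) ** 2             # loss = 16

# Backward pass: error flows from the loss back to the weight
dloss_dy = 2 * (y - t)          # dL/dy = -8
dy_dw = x                       # dy/dw = 2
dloss_dw = dloss_dy * dy_dw     # chain rule: dL/dw = -16

# Weight update: move opposite to the gradient to reduce the loss
w = w - 0.01 * dloss_dw
print(w)                        # 3.16: prediction moves toward the target
```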

Stochastic Gradient Descent (SGD)

What SGD Means

SGD improves the model step by step. It uses small portions of the data at a time.

Why Small Steps

Big steps may overshoot the best solution. Small steps move safely toward the minimum error.
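Both ideas, one randomly chosen example per step and a small learning rate, can be seen in this sketch, which learns the made-up relation y = 2x:

```python
import random

data = [(1, 2), (2, 4), (3, 6), (4, 8)]   # examples of y = 2x
w = 0.0                                    # initial guess
lr = 0.05                                  # small learning rate = small steps

random.seed(0)
for _ in range(200):
    x, y = random.choice(data)             # "stochastic": one example at a time
    grad = 2 * (w * x - y) * x             # gradient of the loss (w*x - y)^2
    w -= lr * grad                         # small step toward lower loss

print(round(w, 3))                         # converges near the true value 2.0
```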

Neural Networks as Universal Function Approximators

Meaning

Neural networks can learn almost any type of pattern. They can fit both simple and complex curves.

Why One Hidden Layer is Powerful

Even one hidden layer, given enough neurons, can approximate any continuous function. More layers mainly make learning easier and more efficient.

Why This Matters

It explains why neural networks are used in:

  1. Image recognition
  2. Speech systems
  3. Self-driving cars

This topic is very important for exams.

AKTU Exam Answer Boost

5-Mark Answer Points

  1. Definition of Machine Learning with example
  2. Explanation of Perceptron working
  3. Loss Function meaning
  4. Backpropagation idea
  5. SGD concept

Important Keywords: Machine Learning, Linear Model, Loss, Weight Update, Backpropagation

10-Mark Answer Points

  1. Explain Machine Learning vs traditional programming
  2. Describe Perceptron, Logistic Regression, and SVM
  3. Draw neural network diagram
  4. Explain training steps
  5. Universal approximation theorem

One-Shot Revision Summary

| Concept | Explanation |
| --- | --- |
| Machine Learning | Computers learn from data instead of rules. |
| Perceptron | Basic binary decision model. |
| Logistic Regression | Gives probability output. |
| SVM | Finds the best separating line with maximum margin. |
| Neural Network | Group of neurons working together. |
| Loss Function | Measures prediction error. |
| Backpropagation | Error correction method. |
| SGD | Learning using small steps. |
| Universal Approximation | Neural networks can learn any function. |

After studying this unit, the student should feel confident to attempt any AKTU question from this chapter.

Tags: #deep-learning #unit-1 #unit #aktu
