University of Windsor

Machine Learning

ELEC 8900 · Semester III · MEng Computer Engineering

License: CC BY 4.0

A comprehensive academic archive for Machine Learning (ELEC 8900), documenting technical proficiency in supervised and unsupervised learning, neural networks, and reinforcement learning within the Master of Engineering program.


Overview  ·  Contents  ·  Reference Books  ·  Personal Preparation  ·  Assignments  ·  DataCamp  ·  In-Class Presentation  ·  Project  ·  Lecture Notes  ·  Syllabus  ·  Usage Guidelines  ·  License  ·  About  ·  Acknowledgments


Overview

Machine Learning (ELEC 8900) is a specialized graduate course in the Master of Engineering (MEng) program at the University of Windsor. The course introduces the fundamental concepts, techniques, and algorithms of machine learning. It explores supervised learning methods including linear regression, logistic regression, multiclass classification, neural networks (CNNs, RNNs, feedforward networks, and deep learning), and decision trees (with bias-variance decomposition). The unsupervised learning section covers probabilistic models, principal component analysis, K-Means, and the EM algorithm, and the course concludes with an overview of reinforcement learning.

Course Objectives

The curriculum encompasses several key machine learning domains:

  • Supervised Learning: Mastering regression and classification techniques to predict continuous variables and categorize data.
  • Neural Networks: Designing and training deep learning models, including CNNs and RNNs, for complex pattern recognition.
  • Unsupervised Learning: Implementing clustering algorithms (K-Means, EM) and dimensionality reduction (PCA) for data analysis.
  • Reinforcement Learning: Understanding the foundations of agent-based learning, Markov decision processes, and exploration-exploitation trade-offs.
  • Applied Engineering: Applying theoretical concepts to real-world datasets through rigorous programming projects.

Repository Purpose

This repository represents a curated collection of study materials, reference books, supplemental resources, assignment reports, course projects, and technical presentations. The primary motivation for creating and maintaining this archive is simple yet profound: to preserve knowledge for continuous learning and future reference.

As the field of Artificial Intelligence evolves, the fundamental principles remain the bedrock of modern engineering. This repository serves as my intellectual reference point: a resource I can return to for reviewing algorithms, refreshing theoretical concepts, and strengthening technical understanding.

Why this repository exists:

  • Knowledge Preservation: To maintain organized access to comprehensive study materials beyond the classroom.
  • Continuous Learning: To support lifelong learning by enabling easy revisitation of fundamental Machine Learning principles.
  • Academic Documentation: To authentically document my learning journey through Machine Learning.
  • Community Contribution: To share these resources with students and learners who may benefit from them.

Note

All materials were created, compiled, and organized by me during the Fall 2023 semester as part of my MEng degree requirements.


Repository Contents

Reference Books

This collection includes comprehensive reference materials covering all major topics:

| # | Resource | Focus Area |
|---|----------|------------|
| 1 | Learning From Data - Abu-Mostafa, Magdon-Ismail, Lin | Mathematical foundations of learning, VC dimension, and regularization. |
| 2 | Pattern Recognition and Machine Learning - Bishop | Bayesian inference and probabilistic graphical models. |
| 3 | The Elements of Statistical Learning - Hastie, Tibshirani, Friedman | Comprehensive coverage of supervised and unsupervised learning algorithms. |
| 4 | Reinforcement Learning: An Introduction - Sutton, Barto | The definitive guide to RL algorithms and theory. |
| 5 | Information Theory, Inference, and Learning Algorithms - MacKay | Deep dive into information theory and neural networks. |
| 6 | Bayesian Reasoning and Machine Learning - Barber | Graphical models and Bayesian methods for machine learning. |

Personal Preparation

Academic roadmap and administrative records for the Fall 2023 session:

| # | Resource | Description |
|---|----------|-------------|
| 1 | Course Syllabus | Official course outcomes and assessment specifications |
| 2 | MEng Class Schedule | Enrollment record and pedagogical timeline |
| 3 | Announcements | Archival log of course announcements and directives |

Assignments

Verified records of practical skill acquisition and academic assessments:

| # | Assignment | Description |
|---|------------|-------------|
| 1 | Multiple Linear Regression | Application of multiple linear regression techniques for predictive modeling. |
| 2 | DataCamp Certifications (Combined) | Comprehensive portfolio of all 5 completed DataCamp course certificates. |

DataCamp Certifications

Industry-recognized certifications in Machine Learning and Data Science:

| # | Certification | Focus |
|---|---------------|-------|
| 1 | Supervised Learning with scikit-learn | Classification and regression algorithms using Python. |
| 2 | Unsupervised Learning in Python | Clustering (K-Means) and dimensionality reduction (PCA). |
| 3 | Linear Classifiers in Python | Logistic Regression and Support Vector Machines (SVMs). |
| 4 | Preprocessing for Machine Learning in Python | Feature engineering, scaling, and data preparation pipelines. |
| 5 | Introduction to Deep Learning in Python | Neural networks and deep learning architecture fundamentals. |


In-Class Presentation

A detailed record of the technical presentation on regression analysis delivered during the semester.


Authors

Amey Thakur · ORCID

Important

🤝🏻 Special Acknowledgement

Special thanks to Jithin Gijo Varghese and Ritika Agarwal for their meaningful contributions, guidance, and support that helped shape this work.

Technical Synthesis: Quantifying Academic Success

This module bridges the gap between theoretical probability and applied data science. By analyzing the Hours Studied vs. Grades Received dataset, we move beyond simple correlation to establish a statistically significant predictive framework. The research explores how computational models can "learn" from a limited set of observations to generate a high-fidelity line of best fit, demonstrating the transition from raw data collection to professional-grade statistical forecasting.
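As a concrete illustration of the "line of best fit" described above, an ordinary least-squares fit takes only a few lines of NumPy. Note that the data values below are invented for demonstration; they are not the actual Hours Studied vs. Grades Received dataset from the assignment.

```python
import numpy as np

# Illustrative stand-in values, NOT the actual course dataset.
hours = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
grades = np.array([52.0, 58.0, 61.0, 67.0, 71.0, 78.0, 82.0, 88.0])

# Closed-form least-squares fit: grade ~ slope * hours + intercept.
slope, intercept = np.polyfit(hours, grades, deg=1)

# R^2 quantifies how much of the grade variance the line explains,
# moving the claim beyond "looks correlated" toward a measurable fit.
pred = slope * hours + intercept
ss_res = np.sum((grades - pred) ** 2)
ss_tot = np.sum((grades - grades.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"grade ~ {slope:.2f} * hours + {intercept:.2f}, R^2 = {r2:.3f}")
```

With these stand-in numbers the fit explains well over 99% of the variance, which is the kind of evidence a "statistically significant predictive framework" rests on.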

Tip

Variable Isolation: In high-dimensional research, identifying the true "drivers" of an outcome requires more than simple observation. Multiple Linear Regression serves as a technical filter, allowing researchers to mathematically isolate the weight of each individual predictor. This process ensures that our models capture the unique contribution of every input, providing a clear roadmap for navigating complex, multifaceted data systems.

Multiple Linear Regression Animation

This visualization demonstrates the convergence of a Multiple Linear Regression model using an iterative Gradient Descent optimization algorithm. By processing two independent variables ($x_1, x_2$) against a dependent target ($y$), the animation illustrates how the model's coefficients ($w_1, w_2$) and bias ($b$) are dynamically adjusted to minimize the Mean Squared Error (MSE). In this three-dimensional space (two features plus the target), the optimization path represents the pursuit of the global minimum on the loss surface, effectively finding the 'plane of best fit' that captures the underlying statistical relationship within the dataset. This computational process is fundamental to predictive modeling, transforming raw inputs into a mathematically verified framework for future forecasting.
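The optimization loop the animation depicts can be sketched in plain NumPy. This is an illustrative reimplementation rather than the notebook's actual code; the synthetic data, learning rate, and iteration count are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (illustrative): y = 3*x1 - 2*x2 + 5, plus small noise.
n = 200
X = rng.normal(size=(n, 2))
true_w, true_b = np.array([3.0, -2.0]), 5.0
y = X @ true_w + true_b + 0.1 * rng.normal(size=n)

# Gradient descent on the MSE loss L(w, b) = mean((X w + b - y)^2).
w, b = np.zeros(2), 0.0
lr = 0.1
for _ in range(500):
    err = X @ w + b - y          # residuals for the current plane
    grad_w = 2 * X.T @ err / n   # dL/dw
    grad_b = 2 * err.mean()      # dL/db
    w -= lr * grad_w
    b -= lr * grad_b

mse = np.mean((X @ w + b - y) ** 2)
print(f"w = {w}, b = {b:.3f}, MSE = {mse:.4f}")
```

Because the MSE surface of a linear model is convex, the descent path always reaches the global minimum; the recovered $w_1, w_2, b$ land close to the generating values, which is exactly the 'plane of best fit' the animation traces.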

Multiple Linear Regression (Group Presentation) · September 29, 2023

Note

Academic Structure: This presentation and report explore the mathematical foundations and practical applications of Multiple Linear Regression, demonstrating the ability to analyze relationships between multiple independent variables and a dependent variable.

| # | Resource | Category | Description |
|---|----------|----------|-------------|
| 1 | Presentation (Version 1) | Presentation | Original technical research slides (Initial Version) |
| 2 | Presentation | Presentation | Final peer-reviewed research slides (Final Version) |
| 3 | Presentation Notes | Documentation | Technical speaker notes and delivery guidelines for the research presentation |
| 4 | Jupyter Notebook | Notebook | Computational implementation of the regression model |
| 5 | Visualization (MP4) | Video | High-fidelity MP4 animation of the Multiple Linear Regression model |
| 6 | Visualization (GIF) | Animation | Lightweight GIF animation for rapid scholarly review |
| 7 | Academic Template | Template | Standardized scholarly presentation framework |

Machine Learning Project

Adapting Pre-trained Diffusion Models for Zero-Shot Text-to-Video Synthesis


Authors

Amey Thakur · ORCID

Important

🤝🏻 Special Acknowledgement

Special thanks to Jithin Gijo Varghese and Ritika Agarwal for their meaningful contributions, guidance, and support that helped shape this work.

Project Overview

This study investigates and implements the Text2Video-Zero approach, enabling the generation of temporally coherent videos from text prompts without the need for large-scale video model training. The implementation focuses on modifying specific Self-Attention mechanisms within pre-trained diffusion models to preserve identity and background consistency across frames. The final system delivers a complete end-to-end pipeline, ranging from Tokenization and Embedding to Video Generation, deployed via a reactive web interface.

Tip

Zero-Shot Synthesis represents a transformative shift in Generative AI; it allows the creation of dynamic video content by adapting pre-existing image models rather than requiring expensive, large-scale video training. This approach makes high-fidelity motion synthesis more accessible by focusing on temporal consistency: ensuring that characters and backgrounds remain stable across every frame.
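The cross-frame attention idea behind this temporal consistency can be illustrated with a minimal NumPy sketch. This is a conceptual toy, not the repository's implementation or the actual diffusers API: the `cross_frame_attention` helper, the single-head unbatched layout, and all tensor shapes are simplifications chosen for clarity.

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def cross_frame_attention(q, k, v):
    """Toy sketch of the cross-frame variant of self-attention:
    every frame's queries attend to the keys/values of the FIRST
    (anchor) frame, so appearance information is drawn from one
    shared source and identity/background stay consistent.

    q, k, v: arrays of shape (frames, tokens, dim).
    """
    d = q.shape[-1]
    k0, v0 = k[0], v[0]                   # anchor frame's keys/values
    scores = q @ k0.T / np.sqrt(d)        # (frames, tokens, tokens)
    return softmax(scores, axis=-1) @ v0  # all frames mix anchor values

rng = np.random.default_rng(1)
frames, tokens, dim = 4, 16, 8
q = rng.normal(size=(frames, tokens, dim))
k = rng.normal(size=(frames, tokens, dim))
v = rng.normal(size=(frames, tokens, dim))

out = cross_frame_attention(q, k, v)
print(out.shape)  # one attended feature map per frame
```

Because keys and values are frozen to the anchor frame, two frames that pose identical queries receive identical outputs, which is the mechanism that keeps a character's appearance stable across the generated clip.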

Resources

| # | Milestone | Date |
|---|-----------|------|
| 1 | Project Proposal | October 01, 2023 |
| 2 | Project Presentation | November 22, 2023 |
| 3 | Final Project Report | November 19, 2023 |
| 4 | Video Demonstration | November 19, 2023 |
| 5 | YouTube Demonstration | November 19, 2023 |

Lecture Notes

A comprehensive archival log documenting pedagogical discourse across fourteen weeks, including weekly slides, applied research presentations, and technical resources for the Fall 2023 session.

Tip

Machine Learning is not merely about algorithms; it is the bridge between data and intelligent decision-making. Every module below focuses on the critical translation from Theoretical Models to Applied Systems, enabling the design and verification of complex learning architectures.

| # | Week | Date | Topic/Activity | Lecture Slides |
|---|------|------|----------------|----------------|
| 1 | Week 01 | September 08, 2023 | Introduction to Machine Learning | View |
| 2 | Week 02 | September 15, 2023 | Data and its Processing in Machine Learning | View |
| 3 | Week 03 | September 22, 2023 | Supervised Learning | View |
| 4 | Week 04 | September 29, 2023 | Supervised Learning (Linear Methods for Regression, Logistic Regression, Multiclass Classification)<br>In-class Assignment Presentations | |
| 5 | Week 05 | October 06, 2023 | Decision Trees, Random Forest<br>In-class Assignment Presentations | |
| 6 | Week 06 | October 13, 2023 | No Classes – Reading Week | |
| 7 | Week 07 | October 20, 2023 | Neural Networks<br>In-class Assignment Presentations | |
| 8 | Week 08 | October 27, 2023 | Neural Networks: Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs)<br>In-class Assignment Presentations | |
| 9 | Week 09 | November 03, 2023 | Unsupervised Learning: Generative Adversarial Networks (GANs), K-Means, and Expectation Maximization (EM) Algorithm<br>In-class Assignment Presentations | |
| 10 | Week 10 | November 10, 2023 | K-Means Clustering, Expectation Maximization (EM) Algorithm (Fuzzy/Spectral/Hierarchical Clustering)<br>In-class Assignment Presentations | |
| 11 | Week 11 | November 17, 2023 | Dimensionality Reduction: Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Reinforcement Learning (RL)<br>In-class Assignment Presentations | |
| 12 | Week 12 | November 24, 2023 | Machine Learning Project Presentations | |
| 13 | Week 13 | December 01, 2023 | Reinforcement Learning (RL) & Course Wrap-Up | |
| 14 | Week 14-15 | December 09-20, 2023 | Final Exam | |

In-Class Assignment Presentations

A granular record of peer-led technical research presentations and computational case studies conducted during the Fall 2023 session.

Note

These peer-led presentations form an essential part of the course curriculum, where student-driven research bridges the gap between machine learning theory and applied computational modeling.

| # | Week | Date | Topics | Presentations |
|---|------|------|--------|---------------|
| 1 | Week 04 | September 29, 2023 | 1. Gradient Descent algorithm and its variants<br>2. Multiple Linear Regression – Amey Thakur<br>3. Multiple Linear Regression – Notebook & Dataset<br>4. Polynomial Regression | 1. View<br>2. View<br>3. View<br>4. View |
| 2 | Week 05 | October 06, 2023 | 1. Logistic Regression<br>2. Decision Tree Regression<br>3. Decision Tree Regression<br>4. Random Forest Regression | 1. View<br>2. View<br>3. View<br>4. View |
| 3 | Week 07 | October 20, 2023 | 1. Naive Bayes Classifiers<br>2. Naive Bayes Classifiers<br>3. Non-Linear Support Vector Machines<br>4. Support Vector Machines<br>5. Ensemble, Voting and Bagging Classifiers | 1. View<br>2. View<br>3. View<br>4. View<br>5. View |
| 4 | Week 08 | October 27, 2023 | 1. Convolutional Neural Networks (CNNs)<br>2. Convolutional Neural Networks (CNNs)<br>3. Recurrent Neural Networks (RNNs)<br>4. Recurrent Neural Networks (RNNs) | 1. View<br>2. View<br>3. View<br>4. View |
| 5 | Week 09 | November 03, 2023 | 1. Generative Adversarial Networks (GANs)<br>2. K-Means Clustering<br>3. Expectation Maximization (EM) Algorithm<br>4. Mean-Shift Clustering<br>5. Fuzzy Clustering – Notes | 1. View<br>2. View<br>3. View<br>4. View<br>5. View |
| 6 | Week 10 | November 10, 2023 | 1. Spectral Clustering<br>2. Hierarchical Clustering<br>3. DBSCAN – Density-Based Clustering<br>4. Dimensionality Reduction: Principal Component Analysis<br>5. Dimensionality Reduction: Principal Component Analysis<br>6. Dimensionality Reduction: Principal Component Analysis | 1. View<br>2. View<br>3. View<br>4. View<br>5. View<br>6. View |
| 7 | Week 11 | November 17, 2023 | 1. Dimensionality Reduction: Linear Discriminant Analysis<br>2. Reinforcement Learning: Q-Learning<br>3. Reinforcement Learning: Q-Learning<br>4. Reinforcement Learning: Policy Gradient Methods<br>5. Reinforcement Learning: Policy Gradient Methods | 1. View<br>2. View<br>3. View<br>4. View<br>5. View |

Syllabus

Official ELEC 8900 Syllabus
Complete graduate-level syllabus document for the Fall 2023 session, including detailed course outcomes, assessment criteria, and module specifications for Machine Learning and Pattern Recognition.

Important

Always verify the latest syllabus details with the official University of Windsor academic portal, as curriculum specifications for machine learning may undergo instructor-led adaptations across different sessions.


Usage Guidelines

This repository is openly shared to support learning and knowledge exchange across the academic community.

For Students
Use these resources as templates for project proposals, reference materials for learning theory, and examples of scholarly documentation. All content is organized for self-paced learning.

For Educators
These materials may serve as curriculum references, sample project benchmarks, or supplementary instructional content in machine learning. Attribution is appreciated when utilizing content.

For Researchers
The project reports and architectural documentation may provide insights into machine learning methodologies and professional engineering documentation structuring.


License

This repository and all linked academic content are made available under the Creative Commons Attribution 4.0 International License (CC BY 4.0). See the LICENSE file for complete terms.

Note

Summary: You are free to share and adapt this content for any purpose, even commercially, as long as you provide appropriate attribution to the original author.


About This Repository

Created & Maintained by: Amey Thakur
Academic Journey: Master of Engineering in Computer Engineering (2023-2024)
Institution: University of Windsor, Windsor, Ontario
Faculty: Faculty of Engineering

This repository represents a comprehensive collection of study materials, reference books, supplemental resources, weekly lecture archives, and project reports curated during my academic journey. All content has been carefully organized and documented to serve as a valuable resource for students pursuing Machine Learning.

Connect: GitHub  ·  LinkedIn  ·  ORCID

Acknowledgments

Grateful acknowledgment to Dr. Yasser M. Alginahi for his exceptional instruction in Machine Learning, which played a pivotal role in shaping my analytical understanding of the subject. His clear and disciplined approach, along with his thorough explanation of complex algorithms and neural networks, made the subject both accessible and engaging. His distinguished expertise and commitment to academic excellence in Machine Learning are sincerely appreciated.

Grateful acknowledgment to my Major Project teammates, Jithin Gijo Varghese and Ritika Agarwal, for their collaborative excellence and shared commitment throughout the semester. Our collective efforts in synthesizing complex datasets, developing rigorous machine learning architectures, and authoring comprehensive technical reports were fundamental to the successful realization of our objectives. This partnership not only strengthened the analytical depth of our shared deliverables but also provided invaluable insights into the dynamics of high-performance engineering teamwork.

Grateful acknowledgment to Jason Horn, Writing Support Desk, University of Windsor, for his distinguished mentorship and scholarly guidance. His analytical feedback and methodological rigor were instrumental in refining the intellectual depth and professional caliber of my academic work. His dedication stands as a testament to the pursuit of academic excellence and professional integrity.

Special thanks to the mentors and peers whose encouragement, discussions, and support contributed meaningfully to this learning experience.



Computer Engineering (M.Eng.) - University of Windsor

Semester-wise curriculum, laboratories, projects, and academic notes.