❗ The content presented here is sourced directly from the Udacity platform. For comprehensive course details, including enrollment information, click the 'Go to class' link on our website.
Updated on June 30th, 2023
What skills and knowledge will you acquire during this course?
During this course, students will learn the concepts of Unsupervised Learning and how to apply them to analyzing data and identifying patterns. The course covers randomized optimization, clustering, and feature selection and transformation, and shows how Unsupervised Learning approaches can be used to find structure in unlabeled data.
How does this course contribute to professional growth?
The course contributes to professional growth by building skills in analyzing data and identifying patterns. It covers Unsupervised Learning approaches such as randomized optimization, clustering, and feature selection and transformation, and it is part of a graduate-level series on Machine Learning aimed at deepening understanding and proficiency in Artificial Intelligence.
Is this course suitable as preparation for further education?
The course "Machine Learning: Unsupervised Learning" focuses on using Unsupervised Learning approaches to find structure in unlabeled data. It covers topics such as randomized optimization, clustering, and feature selection and transformation. As part of a graduate-level series of 3 courses on Machine Learning, taught by Professor Charles Isbell of Georgia Tech and Professor Michael Littman of Brown University, it is well suited as preparation for further graduate-level study.
Course Syllabus
Randomized optimization
Optimization, randomized, Hill climbing, Random restart hill climbing, Simulated annealing, Annealing algorithm, Properties of simulated annealing, Genetic algorithms, GA skeleton, Crossover example, What have we learned, MIMIC, MIMIC: A probability model, MIMIC: Pseudo code, MIMIC: Estimating distributions, Finding dependency trees, Probability distribution
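As a rough illustration of the simulated annealing idea listed above, here is a minimal sketch. The function names, cooling schedule, and toy objective are my own for illustration, not taken from the course material:

```python
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

def simulated_annealing(f, x0, neighbor, t0=10.0, cooling=0.99, steps=5000):
    """Maximize f: always accept uphill moves; accept downhill moves
    with probability exp(delta / T), where T shrinks over time."""
    x, t = x0, t0
    best = x
    for _ in range(steps):
        xn = neighbor(x)
        delta = f(xn) - f(x)
        # High temperature: almost a random walk; low temperature: hill climbing
        if delta > 0 or random.random() < math.exp(delta / t):
            x = xn
        if f(x) > f(best):
            best = x
        t = max(t * cooling, 1e-6)  # geometric cooling schedule
    return best

# Toy example: maximize -(x - 3)^2, whose optimum is at x = 3
result = simulated_annealing(
    f=lambda x: -(x - 3) ** 2,
    x0=0.0,
    neighbor=lambda x: x + random.uniform(-1, 1),
)
```

The annealing schedule (starting temperature and cooling rate) controls the trade-off between exploration and exploitation; too-fast cooling degenerates into plain hill climbing.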
Clustering
Clustering and expectation maximization, Basic clustering problem, Single linkage clustering (SLC), Running time of SLC, Issues with SLC, K-means clustering, K-means in Euclidean space, K-means as optimization, Soft clustering, Maximum likelihood Gaussian, Expectation Maximization (EM), Impossibility theorem
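A minimal sketch of k-means in Euclidean space, one of the topics above. This is Lloyd's algorithm on 1-D points for brevity; the data and helper names are my own, not from the course:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Alternate between assigning points to the nearest center
    and moving each center to the mean of its assigned points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize centers from the data
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: (p - centers[j]) ** 2)
            clusters[i].append(p)
        # Update step: move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two well-separated groups around 1.0 and 10.0
data = [1.0, 1.2, 0.8, 9.8, 10.0, 10.2]
centers = kmeans(data, k=2)
```

Each iteration can only decrease the within-cluster squared error, which is why k-means can be viewed as an optimization procedure (another topic listed above).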
Feature Selection
Algorithms, Filtering and Wrapping, Speed, Searching, Relevance, Relevance vs. Usefulness
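To illustrate the filtering style of feature selection mentioned above, here is a tiny sketch that scores features by a cheap proxy criterion (variance) rather than by training a learner, as a wrapper would. The threshold and toy data are my own illustrative choices:

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def filter_select(feature_columns, threshold=0.01):
    """Filter-style selection: keep the indices of features whose
    variance exceeds a threshold, without consulting any learner."""
    return [i for i, col in enumerate(feature_columns)
            if variance(col) > threshold]

cols = [
    [1.0, 1.0, 1.0, 1.0],  # constant feature: carries no information, dropped
    [0.0, 1.0, 0.0, 1.0],  # varying feature: kept
]
selected = filter_select(cols)  # → [1]
```

Filters like this are fast but judge relevance in isolation; wrapper methods are slower but measure usefulness to the actual learner, which is the relevance-vs-usefulness distinction in the topic list.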
Feature Transformation
Feature Transformation, Words like Tesla, Principal Components Analysis, Independent Components Analysis, Cocktail Party Problem, Matrix, Alternatives
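A minimal sketch of Principal Components Analysis from the list above, assuming NumPy is available; the function name and toy data are my own illustration:

```python
import numpy as np

def pca(X, n_components=1):
    """Project data onto the directions of maximal variance
    (eigenvectors of the covariance matrix with largest eigenvalues)."""
    Xc = X - X.mean(axis=0)           # center the data
    cov = np.cov(Xc, rowvar=False)    # feature-by-feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)  # eigh: symmetric input, ascending eigenvalues
    top = vecs[:, np.argsort(vals)[::-1][:n_components]]
    return Xc @ top                   # coordinates in the principal subspace

# Points lying near the line y ≈ 2x: one component captures nearly all variance
X = np.array([[1.0, 2.1], [2.0, 3.9], [3.0, 6.2], [4.0, 8.0]])
Z = pca(X, n_components=1)
```

Where PCA finds directions of maximal variance, Independent Components Analysis (also listed above) instead seeks statistically independent sources, which is what makes it suited to the cocktail-party problem.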
Information Theory
History - Sending a Message, Expected size of the message, Information between two variables, Mutual information, Two Independent Coins, Two Dependent Coins, Kullback-Leibler Divergence
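The two-coins examples above can be sketched directly from the definition of mutual information, I(X;Y) = Σ p(x,y) log₂[ p(x,y) / (p(x)p(y)) ]. The function name and joint tables below are my own illustration:

```python
import math

def mutual_information(joint):
    """Mutual information (in bits) from a 2-D joint probability table:
    I(X;Y) = sum over x,y of p(x,y) * log2( p(x,y) / (p(x) * p(y)) )."""
    px = [sum(row) for row in joint]           # marginal of X (rows)
    py = [sum(col) for col in zip(*joint)]     # marginal of Y (columns)
    return sum(
        p * math.log2(p / (px[i] * py[j]))
        for i, row in enumerate(joint)
        for j, p in enumerate(row)
        if p > 0                               # 0 * log 0 is taken as 0
    )

# Two independent fair coins: the joint factors, so I(X;Y) = 0 bits
independent = [[0.25, 0.25], [0.25, 0.25]]
# Two fully dependent coins (always equal): one reveals the other, I(X;Y) = 1 bit
dependent = [[0.5, 0.0], [0.0, 0.5]]
print(mutual_information(independent))  # 0.0
print(mutual_information(dependent))    # 1.0
```

Mutual information is itself a Kullback-Leibler divergence: the divergence between the joint p(x,y) and the product of the marginals p(x)p(y), which ties together the last topics in the list.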