Machine Learning Studio

@PyMLstudio - 27 videos

5,990 subscribers

I'm Vahid Mirjalili, a Senior Data Scientist with over five years of experience in Multimodal Machine Learning, Computer Vision, NLP, and Deep Learning. I ho...

Recent videos

Encoder-Decoder Architecture in Transformers 7:20

Parameter Efficient Fine Tuning PEFT 13:51

Retrieval Augmented Generation (RAG) 9:10

Enhancing LLMs (an overview) 10:08

FlashAttention: Accelerate LLM training 11:27

An Overview of Object Recognition Tasks 28:09

Dataset Management with FiftyOne 55:40

OpenAI CLIP model explained 12:08

DINO -- Self-supervised ViT 11:18

Swin Transformer 9:23

Variants of ViT: DeiT and T2T-ViT 12:02

Vision Transformer (ViT) 11:10

Evolution of Self-Attention in Vision 13:36

Relative Self-Attention Explained 9:09

Self-Attention in Image Domain: Non-Local Module 8:57

Introducing a new series on Vision Transformers 1:23

Linear Complexity in Attention Mechanism: A step-by-step implementation in PyTorch 27:08

Efficient Self-Attention for Transformers 21:31

Variants of Multi-head attention: Multi-query (MQA) and Grouped-query attention (GQA) 8:13

PostLN, PreLN and ResiDual Transformers 7:48

Transformer Architecture 8:11

Top Optimizers for Neural Networks 29:00

A Dive Into Multihead Attention, Self-Attention and Cross-Attention 9:57

Self-Attention Using Scaled Dot-Product Approach 16:09

GPT-4 release: a 5-minute overview 5:17

Matrix Multiplication Concept Explained 14:09

A Review of 10 Most Popular Activation Functions in Neural Networks 15:59

Videos

Encoder-Decoder Architecture in Transformers 7:20
955 views - 8 months ago

Parameter Efficient Fine Tuning PEFT 13:51
1,193 views - 9 months ago

Retrieval Augmented Generation (RAG) 9:10
547 views - 1 year ago

Enhancing LLMs (an overview) 10:08
629 views - 1 year ago

FlashAttention: Accelerate LLM training 11:27
8,320 views - 1 year ago

An Overview of Object Recognition Tasks 28:09
480 views - 1 year ago

Dataset Management with FiftyOne 55:40
366 views - 1 year ago

OpenAI CLIP model explained 12:08
24K views - 1 year ago

DINO -- Self-supervised ViT 11:18
1,579 views - 1 year ago

Swin Transformer 9:23
9,526 views - 1 year ago

Variants of ViT: DeiT and T2T-ViT 12:02
2,110 views - 1 year ago

Vision Transformer (ViT) 11:10
3,641 views - 1 year ago