Linear Transformation

Summary

As you work with vectors, you may have heard of the term Linear Transformation.

A Linear Transformation is a process of changing the input vector. Actually, we can view it in two ways: changing the input vector, or changing the basis.

Linear Transformation

A linear transformation $L$ satisfies the following rules:

$$
L(\vec{v}+\vec{w}) = L(\vec{v}) + L(\vec{w}) \quad ...(1)
$$

$$
L(c\,\vec{v}) = c\,L(\vec{v}) \quad ...(2)
$$
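
As a quick sanity check, any matrix multiplication satisfies these two rules. Below is a minimal NumPy sketch; the matrix and vectors are arbitrary choices for illustration, not taken from the text.

```python
import numpy as np

# An arbitrary 2x2 transformation L and arbitrary inputs (illustration only).
L = np.array([[1.0, 1.0],
              [0.0, 1.0]])
v = np.array([2.0, 3.0])
w = np.array([-1.0, 4.0])
c = 2.5

# Rule (1): L(v + w) == L(v) + L(w)
assert np.allclose(L @ (v + w), L @ v + L @ w)

# Rule (2): L(c * v) == c * L(v)
assert np.allclose(L @ (c * v), c * (L @ v))
```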

In 2D space, a linear transformation can be expressed as a $2\times 2$ matrix. Let's say $\vec{b}^T$ is the row of basis vectors, $L$ is a $2\times 2$ linear transformation matrix, and $\vec{v}$ is the input vector.

Then we can express linear transformation as following:

$$
\vec{b}^T \cdot \vec{v} \;\Rightarrow\; \vec{b}^T \cdot L \cdot \vec{v} \quad ...(3)
$$

Applying the linear transformation matrix between the basis vectors and the input vector produces the linear transformation!
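
As a concrete instance of equation (3), take a shear matrix for $L$ (my own example, not from the text), with $\vec{b}^T = (\hat{i} \;\; \hat{j})$ and $\vec{v} = (x, y)^T$:

$$
\begin{pmatrix} \hat{i} & \hat{j} \end{pmatrix}
\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} \hat{i} & \hat{j} \end{pmatrix}
\begin{pmatrix} x + y \\ y \end{pmatrix}
= (x + y)\,\hat{i} + y\,\hat{j}
$$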

We expressed the linear transformation in equation (3). Since there are three terms ($\vec{b}^T$, $L$, $\vec{v}$), there are two possible orders of calculation (a small numerical check follows the list).

  1. Calculating $L \cdot \vec{v}$ first, and then taking the dot product with $\vec{b}^T$ - viewing it as changing the input vector.

  2. Calculating $\vec{b}^T \cdot L$ first, and then taking the dot product with $\vec{v}$ - viewing it as changing the basis vector.
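
Because matrix multiplication is associative, both orders give the same result. Here is a minimal NumPy sketch, under the assumption that the basis vectors are stacked as the columns of a matrix `B` (the standard basis here), so that $\vec{b}^T \cdot L \cdot \vec{v}$ becomes `B @ L @ v`:

```python
import numpy as np

B = np.eye(2)               # columns are the basis vectors (standard basis here)
L = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # the linear transformation
v = np.array([2.0, 3.0])    # the input vector

# Order 1: transform the input vector first, then combine with the basis.
changed_input = B @ (L @ v)

# Order 2: transform the basis first, then combine with the input vector.
changed_basis = (B @ L) @ v

assert np.allclose(changed_input, changed_basis)
```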

Let's take a look at two different perspectives.

Viewing as changing input vector

If we calculate $L \cdot \vec{v}$ first, we are viewing the linear transformation as changing the input vector without changing the basis vectors.

Viewing as changing basis vector

If we calculate $\vec{b}^T \cdot L$ first, we are viewing the linear transformation as changing the basis vectors without changing the input vector.
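
To make this view concrete: with the standard basis, the columns of $\vec{b}^T \cdot L$ are exactly where $\hat{i}$ and $\hat{j}$ land after the transformation. A small sketch, reusing the shear matrix from above (again my own example):

```python
import numpy as np

B = np.eye(2)               # standard basis: columns are i-hat and j-hat
L = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # shear transformation

new_B = B @ L               # transformed basis vectors, one per column
print(new_B[:, 0])          # image of i-hat -> [1. 0.]
print(new_B[:, 1])          # image of j-hat -> [1. 1.]
```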

I highly recommend the video in [1]. It explains linear transformations with good visual material.

References

[1] 3Blue1Brown, Linear Transformation video: https://youtu.be/kYB8IZa5AuE?si=km9aGBH-lZWzOUoN