Function Space (Hilbert Space)

Summary

A vector space consists of vectors. A function space applies the same idea to functions: we treat each function as a vector and define a space whose elements are functions.

Functions as Vectors

How can we think of a function as a vector?

If we have a function, we can read off the points on its graph. If we line up all of those points (infinitely many of them) into one long tuple, we can treat the function as a vector:

$$f(x) \rightarrow (1, 5, 2, 4, 3, 3, 4, 4, 4, 5, \ldots)$$
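
To make this concrete, here is a minimal NumPy sketch (my own illustration, not from the original post): sampling a function on a finite grid yields an ordinary array, which is exactly the function-as-a-vector idea truncated to finitely many coordinates.

```python
import numpy as np

# Sample f(x) = sin(x) at 8 grid points; the resulting array is the
# "function as a vector" idea, truncated to finitely many coordinates.
# The true function-space picture is the limit of an ever-finer grid.
xs = np.linspace(0.0, 2.0 * np.pi, 8)
f_vec = np.sin(xs)

print(f_vec)  # 8 coordinates of the (approximate) vector representing sin
```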

Inner product in function space

How can we calculate the inner product of two functions in function space?

In ordinary vector space, we calculate the inner product as follows:

$$v \cdot w = (1, 2) \cdot (3, 4) = 1 \cdot 3 + 2 \cdot 4 = 3 + 8 = 11$$

For a probability measure $\mu$, the inner product of two functions is

$$\left\langle f, g \right\rangle_\mu = \int_{-\infty}^{\infty} f(x)\, g(x)\, d\mu(x)$$

You can derive this by thinking of each function as a vector: the products $f(x)g(x)$ play the role of the coordinatewise products in a dot product, and the sum over coordinates becomes an integral weighted by $\mu$.
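
As a sanity check, here is a small sketch of this formula (my addition; picking the standard normal density for $\mu$ is an assumption for illustration). The weighted Riemann sum below is literally a dot product with weights supplied by $\mu$, and for a probability measure it equals $E[f(X)g(X)]$ with $X \sim \mu$:

```python
import numpy as np

# Approximate <f, g>_mu for the standard normal measure dmu(x) = phi(x) dx.
# The weighted sum below is just a dot product of the sampled functions,
# with the measure mu supplying the weights.
f = lambda x: np.sin(x)
g = lambda x: x**2

xs = np.linspace(-8.0, 8.0, 100_001)           # fine grid over (most of) R
dx = xs[1] - xs[0]
phi = np.exp(-xs**2 / 2) / np.sqrt(2 * np.pi)  # standard normal density

inner = np.sum(f(xs) * g(xs) * phi * dx)       # Riemann sum for the integral
print(inner)  # ~0: sin(x) * x**2 is odd and mu is symmetric about 0

# For a probability measure, <f, g>_mu = E[f(X) g(X)] with X ~ mu:
x_samples = np.random.default_rng(0).standard_normal(1_000_000)
print(np.mean(f(x_samples) * g(x_samples)))    # also ~0, up to sampling noise
```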

Norm in function space

How can we calculate the size of a function in function space?

It is the same as calculating the norm in vector space: we reuse the definition of the inner product.

In vector space, we compute the norm from the inner product as follows:

$$\mathrm{norm}(v) = \sqrt{v \cdot v}$$

As a result, the norm in function space is as follows:

$$\left\| f \right\|_{L_2(\mu)} = \left\langle f, f \right\rangle_\mu^{\frac{1}{2}}$$
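
Continuing the same sketch (again my own illustration, with the standard normal measure assumed for $\mu$): the norm is the square root of a function's inner product with itself. A quick check is that the constant function $f(x) = 1$ must have norm exactly 1 under any probability measure, since $\mu$ has total mass 1:

```python
import numpy as np

# L2(mu) norm as the square root of <f, f>_mu, with mu the standard normal.
xs = np.linspace(-8.0, 8.0, 100_001)
dx = xs[1] - xs[0]
phi = np.exp(-xs**2 / 2) / np.sqrt(2 * np.pi)  # standard normal density

def l2_norm(f):
    # sqrt of the Riemann-sum approximation of <f, f>_mu
    return np.sqrt(np.sum(f(xs) ** 2 * phi * dx))

print(l2_norm(lambda x: np.ones_like(x)))  # ~1.0: mu is a probability measure
print(l2_norm(np.sin))                     # norm of sin under mu (~0.66)
```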

References

[1] https://m.blog.naver.com/choi_s_h/221749422119
[2] https://velog.io/@oldboy818/14%EA%B0%95Function-Space-bnwz7ha1