A visual guide to Vision Transformer – A scroll story

April 16, 2024
in Technology

This is a visual guide to Vision Transformers (ViTs), a class of deep learning models that have achieved state-of-the-art performance on image classification tasks. Vision Transformers apply the transformer architecture, originally designed for natural language processing (NLP), to image data. This guide walks you through the key components of Vision Transformers in a scroll-story format, using visualizations and simple explanations to help you understand how these models work and what the flow of data through the model looks like.

Like normal convolutional neural networks, vision transformers are trained in a supervised manner. This means that the model is trained on a dataset of images and their corresponding labels.

1) Focus on one data point

To get a better understanding of what happens inside a vision transformer, let's focus on a single data point (batch size of 1) and ask the question: how is this data point prepared so that it can be consumed by a transformer?

2) Forget the label for the moment

The label will become relevant later. For now, the only thing we are left with is a single image.

3) Create patches of the image

To prepare the image for use inside the transformer, we divide it into equally sized patches of size p x p.

4) Flattening the image patches

The patches are now flattened into vectors of dimension p’ = p²·c, where p is the size of the patch and c is the number of channels.
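
To make steps 3 and 4 concrete, here is a minimal PyTorch sketch (not taken from the post's Colab notebook); the 224 x 224 image size, the patch size of 16, and the variable names are illustrative assumptions.

```python
# Illustrative sketch: split a single image into flattened p x p patches.
# Each patch becomes a vector of length p' = p * p * c, as described above.
import torch

c, h, w, p = 3, 224, 224, 16                   # channels, height, width, patch size (assumed values)
img = torch.randn(c, h, w)                     # our single data point (batch size of 1)

patches = img.unfold(1, p, p).unfold(2, p, p)  # (c, h/p, w/p, p, p)
patches = patches.permute(1, 2, 0, 3, 4)       # (h/p, w/p, c, p, p)
patches = patches.reshape(-1, c * p * p)       # (n, p * p * c) = (196, 768)
print(patches.shape)                           # torch.Size([196, 768])
```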

5) Creating patch embeddings

These image patch vectors are now encoded using a linear transformation. The resulting Patch Embedding Vector has a fixed size d.
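
A minimal sketch of this linear projection, continuing with the illustrative shapes from the previous snippet (the embedding size d = 384 is an arbitrary choice, not a value from the post):

```python
import torch
import torch.nn as nn

n, p_flat, d = 196, 768, 384                     # number of patches, flattened patch size, embedding size
patches = torch.randn(n, p_flat)                 # flattened image patches from the previous step
to_patch_embedding = nn.Linear(p_flat, d)        # the learnable linear transformation
patch_embeddings = to_patch_embedding(patches)   # (n, d) = (196, 384)
```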

6) Embedding all patches

Now that we have embedded our image patches into vectors of fixed size, we are left with an array of size n x d, where n is the number of image patches and d is the size of the patch embedding.

7) Appending a classification token

In order to effectively train our model, we extend the array of patch embeddings by an additional vector called the classification token (cls token). This vector is a learnable parameter of the network and is randomly initialized. Note: we only have one cls token, and we append the same vector for all data points.
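
A minimal sketch of appending the cls token, assuming the same illustrative shapes as above:

```python
import torch
import torch.nn as nn

n, d = 196, 384
patch_embeddings = torch.randn(n, d)                       # the embedded image patches
cls_token = nn.Parameter(torch.randn(1, d))                # one learnable, randomly initialized vector
tokens = torch.cat([cls_token, patch_embeddings], dim=0)   # (n + 1, d) = (197, 384)
```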

8) Add positional embedding vectors

Currently our patch embeddings have no positional information associated with them. We remedy that by adding a learnable, randomly initialized positional embedding vector to each of our patch embeddings. We also add such a positional embedding vector to our classification token.
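
And a sketch of adding the learnable positional embeddings (again with assumed shapes):

```python
import torch
import torch.nn as nn

n, d = 196, 384
tokens = torch.randn(n + 1, d)                        # cls token + patch embeddings
pos_embedding = nn.Parameter(torch.randn(n + 1, d))   # learnable, randomly initialized positions
transformer_input = tokens + pos_embedding            # still (n + 1, d)
```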

9) Transformer Input

After the positional embedding vectors have been added, we are left with an array of size (n+1) x d. This will be our input to the transformer, which will be explained in greater detail in the next steps.

10.1) Transformer: QKV Creation

Our transformer input patch embedding vectors are linearly mapped to larger vectors, which are then split into three equal-sized parts: the Q (query) vector, the K (key) vector, and the V (value) vector. We will have (n+1) of each of these vectors.
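
A sketch of the QKV creation for a single attention head; the per-head size d_head = 64 is an assumption made for illustration:

```python
import torch
import torch.nn as nn

n_tokens, d, d_head = 197, 384, 64              # (n + 1) tokens, embedding size, per-head size
x = torch.randn(n_tokens, d)                    # the transformer input from step 9
to_qkv = nn.Linear(d, 3 * d_head, bias=False)   # one large linear map ...
q, k, v = to_qkv(x).chunk(3, dim=-1)            # ... split into three equal-sized parts
```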

10.2) Transformer: Attention Score Calculation

To calculate our attention scores A, we now multiply all of our query vectors Q with all of our key vectors K.

10.3) Transformer: Attention Score Matrix

Now that we have the attention score matrix A, we apply a `softmax` function to every row so that each row sums up to 1.
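
Steps 10.2 and 10.3 as a sketch. Note that most implementations also scale the scores by 1/√d_head before the softmax; the scroll story glosses over this detail:

```python
import torch

n_tokens, d_head = 197, 64
q = torch.randn(n_tokens, d_head)                 # query vectors Q
k = torch.randn(n_tokens, d_head)                 # key vectors K

scores = (q @ k.transpose(0, 1)) / d_head ** 0.5  # attention scores A, shape (n + 1, n + 1)
attn = scores.softmax(dim=-1)                     # every row now sums up to 1
```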

10.4) Transformer: Aggregated Contextual Information Calculation

To calculate the aggregated contextual information for the first patch embedding vector, we focus on the first row of the attention matrix and use its entries as weights for our value vectors V. The result is the aggregated contextual information vector for the first image patch embedding.

10.5) Transformer: Aggregated Contextual Information for Every Patch

Now we repeat this process for every row of our attention score matrix, and the result is n+1 aggregated contextual information vectors: one for every patch plus one for the classification token. This step concludes our first attention head.
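
Steps 10.4 and 10.5 reduce to a weighted sum, or equivalently a single matrix product:

```python
import torch

n_tokens, d_head = 197, 64
attn = torch.softmax(torch.randn(n_tokens, n_tokens), dim=-1)  # attention score matrix A (rows sum to 1)
v = torch.randn(n_tokens, d_head)                              # value vectors V

first = attn[0] @ v   # aggregated contextual information for the first patch embedding
all_out = attn @ v    # all rows at once: (n + 1, d_head), one vector per patch + one for the cls token
```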

10.6) Transformer: Multi-Head Attention

Because we are dealing with multi-head attention, we repeat the entire process from steps 10.1–10.5 with a different QKV mapping. For our explanatory setup we assume 2 heads, but typically a ViT has many more. In the end, this results in multiple aggregated contextual information vectors per patch.

10.7) Transformer: Last Attention Layer Step

The outputs of these heads are stacked together and mapped to vectors of size d, the same size as our patch embeddings.
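
A sketch of step 10.7, assuming the 2-head setup used in the explanation:

```python
import torch
import torch.nn as nn

n_tokens, d_head, n_heads, d = 197, 64, 2, 384
head_outputs = [torch.randn(n_tokens, d_head) for _ in range(n_heads)]  # one result per attention head

stacked = torch.cat(head_outputs, dim=-1)   # (n + 1, n_heads * d_head)
to_out = nn.Linear(n_heads * d_head, d)     # map back to the patch-embedding size d
attention_out = to_out(stacked)             # (n + 1, d)
```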

10.8) Transformer: Attention Layer Result

The previous step concludes the attention layer, and we are left with the same number of embeddings, of exactly the same size, as we used as input.

10.9) Transformer: Residual Connections

Transformers make heavy use of residual connections, which simply means adding the input of the previous layer to the output of the current layer. This is what we do now.

10.10) Transformer: Residual Connection Result

The addition results in vectors of the same size.

10.11) Transformer: Feed-Forward Network

These outputs are now fed through a feed-forward neural network with non-linear activation functions.
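
A sketch of the feed-forward network; the hidden size of 4·d and the GELU activation are conventional ViT choices, not something the post specifies:

```python
import torch
import torch.nn as nn

n_tokens, d, d_hidden = 197, 384, 4 * 384
x = torch.randn(n_tokens, d)               # attention layer output after the residual connection

feed_forward = nn.Sequential(              # two linear layers with a non-linearity in between
    nn.Linear(d, d_hidden),
    nn.GELU(),
    nn.Linear(d_hidden, d),
)
out = x + feed_forward(x)                  # the second residual connection mentioned in step 10.12
```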

10.12) Transformer: Final Result

After the feed-forward step there is another residual connection, which we will skip here for brevity. This last step concludes the transformer layer. In the end, the transformer produces outputs of the same size as its input.

11) Repeat Transformers

Repeat the entire transformer calculation (steps 10.1–10.12) several times, e.g. 6 times.
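
Putting the pieces together, here is a compact sketch of one transformer layer repeated several times. It uses PyTorch's built-in nn.MultiheadAttention instead of the manual per-head computation above, and, like the post, it omits the layer normalization that real ViTs apply around each sub-layer:

```python
import torch
import torch.nn as nn

class TransformerLayer(nn.Module):
    """Attention + feed-forward, each followed by a residual connection (steps 10.1-10.12)."""
    def __init__(self, d: int, n_heads: int = 2, d_hidden: int = 1536):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d, d_hidden), nn.GELU(), nn.Linear(d_hidden, d))

    def forward(self, x):
        x = x + self.attn(x, x, x)[0]   # attention layer + residual connection
        return x + self.ff(x)           # feed-forward layer + residual connection

d, depth = 384, 6
x = torch.randn(1, 197, d)              # (batch, n + 1, d)
for layer in [TransformerLayer(d) for _ in range(depth)]:   # step 11: repeat, e.g. 6 times
    x = layer(x)
```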

12) Identify Classification token output

The next step is to identify the classification token output. This vector will be used in the final step of our Vision Transformer journey.

13) Final Step: Predicting classification probabilities

In this final step, we use the classification token output and another fully connected neural network to predict the classification probabilities for our input image.
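
Steps 12 and 13 as a sketch; the number of classes is an arbitrary example:

```python
import torch
import torch.nn as nn

d, num_classes = 384, 10
x = torch.randn(1, 197, d)               # final transformer output: (batch, n + 1, d)
cls_output = x[:, 0]                     # step 12: the classification token output
mlp_head = nn.Linear(d, num_classes)     # step 13: fully connected classification head
logits = mlp_head(cls_output)            # (1, num_classes)
probs = logits.softmax(dim=-1)           # predicted classification probabilities
```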

We train the Vision Transformer using a standard cross-entropy loss function, which compares the predicted class probabilities with the true class labels. The model is trained using backpropagation and gradient descent, updating the model parameters to minimize the loss function.
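
A minimal sketch of that training step (a real loop would also use an optimizer such as torch.optim.Adam and iterate over batches):

```python
import torch
import torch.nn.functional as F

num_classes = 10
logits = torch.randn(1, num_classes, requires_grad=True)  # predicted class scores for one image
label = torch.tensor([3])                                  # the true class label

loss = F.cross_entropy(logits, label)                      # standard cross-entropy loss
loss.backward()                                            # backpropagation computes the gradients
```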

In this visual guide, we have walked through the key components of Vision Transformers, from the data preparation to the training of the model. We hope this guide has helped you understand how Vision Transformers work and how they can be used to classify images.

I prepared this little Colab notebook to help you understand the Vision Transformer even better. Please look for the ‘Blogpost’ comment. The code was taken from @lucidrains’ great ViT PyTorch implementation – be sure to check out his work.

If you have any questions or feedback, please feel free to reach out to me. Thank you for reading!

Copyright for syndicated content belongs to the linked source: Hacker News – https://blog.mdturp.ch/posts/2024-04-05-visual_guide_to_vision_transformer.html
