Date | Moderator | Topic | To Read |
---- | --------- | ----- | ------- |
20.09.2024 | Adam Harmanec | YOLOv10: Real-Time End-to-End Object Detection | |
19.07.2024 | Adam Harmanec | Trackastra: Transformer-based cell tracking for live-cell microscopy | |
07.06.2024 | Tomas Karella | Making Convolutional Networks Shift-Invariant Again | |
24.05.2024 | Michal Bartos | Plug-and-Play Image Restoration with Deep Denoiser Prior | |
26.04.2024 | Tomas Karella | Deep Networks with Stochastic Depth | |
12.04.2024 | Jan Kotera | Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture | |
15.03.2024 | Adam Harmanec | TrackFormer: Multi-Object Tracking with Transformers | |
25.02.2024 | Tomas Karella | CoAtNet: Marrying Convolution and Attention for All Data Sizes | |
16.02.2024 | Adam Novozamsky | Learning Continuous Image Representation with Local Implicit Image Function | |
19.01.2024 | Tomas Kerepecky | NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis | |
05.01.2024 | Tomas Karella | ImageBind: One Embedding Space To Bind Them All | |
15.12.2023 | Adam Harmanec | DETR: End-to-End Object Detection with Transformers | |
10.11.2023 | Tomas Karella | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | |
20.10.2023 | Adam Harmanec | Progressive Distillation for Fast Sampling of Diffusion Models | |
18.08.2023 | Antonie Brozova | A Unified Framework for U-Net Design and Analysis | |
08.08.2023 | Vaclav Kosik | LoRA: Low-Rank Adaptation of Large Language Models | |
14.07.2023 | Adam Harmanec | Space-Time Correspondence as a Contrastive Random Walk | |
29.06.2023 | Filip Sroubek | Amortised MAP Inference for Image Super-resolution | |
12.06.2023 | Tomas Karella | DINOv2: Learning Robust Visual Features without Supervision | |
02.06.2023 | Adam Harmanec | DINO: Emerging Properties in Self-Supervised Vision Transformers | |
19.05.2023 | Adam Novozamsky | CLIP: Learning Transferable Visual Models From Natural Language Supervision | |
05.05.2023 | Tomas Karella | SAM: Segment Anything | |
14.04.2023 | Tomas Karella | How Do Vision Transformers Work? | |
24.03.2023 | Tomas Karella | LDMs: High-Resolution Image Synthesis with Latent Diffusion Models | |
03.03.2023 | Tomas Kerepecky | DDPM: Denoising Diffusion Probabilistic Models | |
14.02.2023 | Adam Harmanec | ConvNeXt: A ConvNet for the 2020s | |
20.01.2023 | Tomas Karella | MaxViT: Multi-Axis Vision Transformer | |
02.12.2022 | Tomas Karella | A Survey of Visual Transformers | |