Bugdayci
United Kingdom
Artist Statement
Description
Consider the image forming in your mind as you encounter these words. It materialized before photons reached your retina, assembled from the vast archive of every sentence you've processed, every pattern you've internalized. Perception, it turns out, is largely an act of sophisticated remembering: memory animated by just enough present-moment data to feel immediate. Every frame that registers in your consciousness has already been filtered through layers of expectation, shaped by the ghost impressions of everything you've encountered before. Priors makes this invisible architecture visible. Across sixteen chapters of 4K vertical moving imagery, the work constructs a field of perpetual becoming, where each moment emerges from the reconciliation of what was remembered and what might be. Built through custom generative processes that mirror the brain's own predictive machinery, these moving images exist in a state of constant anticipation, forever reaching toward forms that hover just beyond recognition. This is perception laid bare: not the passive recording of reality, but its active invention. The system learns, predicts, and adjusts, much like consciousness itself, generating experience through the endless negotiation between prior knowledge and incoming sensation. What emerges is neither memory nor prediction, but the very space where both converge into the present moment of seeing.
Process
I became fascinated by the parallels between how our brains construct reality and how machine learning systems process information. We often think of perception as simply receiving what's "out there," but neuroscience shows us it's much more active: we're constantly predicting what we'll encounter based on everything we've experienced before. What really struck me was realizing that both human and artificial vision systems are fundamentally doing a similar thing: building models of the world and then using those models to interpret new information. We're all working with "priors," accumulated expectations that shape what we see before we even see it. I wanted to make this invisible process visible. The generative system I built mirrors this cycle of prediction and revision, where each of the 16 chapters develops its own visual language based on a different dataset. It's like watching 16 different ways of seeing emerge from the same starting point. The vertical format creates a sense of falling or flowing through different states of perception, rather than the traditional horizontal narrative we're used to. I wanted viewers to feel as though they were moving through layers of consciousness, experiencing how vision shifts between memory and anticipation. Ultimately, I was curious about what it means to see in an age where machines are learning to see alongside us.
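The prediction-and-revision cycle described above can be sketched as a minimal predictive-coding loop. This is an illustrative toy, not the actual generative system: the function names, the scalar "world," and the learning rate are all hypothetical stand-ins for the idea that a prior is continually corrected by prediction error.

```python
import random

random.seed(7)  # reproducible noise for this sketch

def predictive_loop(signal, learning_rate=0.2):
    """Toy predictive-coding loop: a prior is repeatedly compared
    against incoming sensation, and the prediction error nudges
    the internal model toward what was actually encountered."""
    prior = 0.0      # accumulated expectation ("what we expect to see")
    history = []
    for sensation in signal:
        prediction = prior                   # perceive by predicting
        error = sensation - prediction       # surprise: mismatch with reality
        prior += learning_rate * error       # revise the internal model
        history.append(prediction)
    return history

# A noisy but stable "world": over time the prior converges toward it,
# so later predictions anticipate the input before it arrives.
world = [1.0 + random.uniform(-0.1, 0.1) for _ in range(50)]
predictions = predictive_loop(world)
```

The point of the sketch is the ordering: the prediction is made first, and sensation only corrects it afterward, which is the sense in which perception here is "memory animated by just enough present-moment data."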
Tools
Priors was created through a multi-layered generative workflow using ComfyUI. The process draws on foundational technologies from deep learning, computer vision, and generative modeling, integrating style-driven synthesis, structural conditioning, and temporal animation. I developed this approach to create a system where image, motion, and perception are genuinely co-authored by human input and machine inference, rather than simply human-directed. The first stage focuses on style and motion generation using latent diffusion models trained on personal visual material and open-source data. The second introduces multi-modal structural control through various guidance methods. The final stage creates localized transformations and emergent behaviors within the frame. Each stage builds on the previous one, allowing technical control and creative emergence to work in dialogue. The 16-minute sequence was composited in DaVinci Resolve.
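The three-stage workflow above can be pictured as a simple pipeline of composed transformations, one per stage, applied per chapter. This is a schematic skeleton only, written under assumed names (`stage_style`, `stage_structure`, `stage_local`, `Frame`); it shows the staged structure of the process, not the actual ComfyUI graph or its models.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """Stand-in for one video frame passing through the pipeline."""
    chapter: int
    layers: list = field(default_factory=list)  # record of applied stages

def stage_style(frame: Frame) -> Frame:
    # Stage 1 (hypothetical): style and motion generation,
    # standing in for latent diffusion synthesis.
    frame.layers.append("style+motion")
    return frame

def stage_structure(frame: Frame) -> Frame:
    # Stage 2 (hypothetical): multi-modal structural conditioning.
    frame.layers.append("structural-control")
    return frame

def stage_local(frame: Frame) -> Frame:
    # Stage 3 (hypothetical): localized transformations within the frame.
    frame.layers.append("local-transform")
    return frame

PIPELINE = [stage_style, stage_structure, stage_local]

def render_chapter(chapter: int) -> Frame:
    frame = Frame(chapter)
    for stage in PIPELINE:  # each stage builds on the previous one
        frame = stage(frame)
    return frame

chapters = [render_chapter(c) for c in range(16)]
```

Composing the stages sequentially mirrors the description in the text: later stages operate on the output of earlier ones, so technical control (conditioning) and emergence (local transformation) act in dialogue rather than independently.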