Name

Bugdayci

United Kingdom

Artist Statement

Experience and the Machine

My work explores how perception is increasingly co-authored by machine intelligence. Using digital media, generative algorithms, and custom AI workflows, I create artworks that make tangible the invisible entanglement between how we see and how machines see with us. The underlying premise is simple: perception is not passive reception; it is a predictive construction. Your brain generates models of what comes next and, interestingly, machine learning systems do something remarkably similar.

Really, what is it about?

Each piece invites viewers into a world where their perceptual assumptions are subtly rerouted. In one work, a machine-generated landscape is shaped by fluid simulations to reconstruct how we experience depth and perspective: rather than being something you move through, the scene flows towards you. In another, viewers interact with an AI to complete missing visual inputs. The machine offers no fixed answers, only probabilistic predictions drawn from its training data, turning recognition into a shared and uncertain process. A third piece constructs an artificial ecosystem from minimal cues, such as flickers, repetition, and reactivity, testing how little information is needed for something to feel sentient. Across these works, perception is not passive but an active negotiation between human expectation and machine inference.

Humans are so full of themselves.

My interest is not in “re”-representing nature, but in understanding how intelligence operates across bodies, environments, and systems. The structures we call natural and the processes we call artificial often behave in strikingly similar ways: they adapt, optimize, stabilize, and respond. Here, intelligence isn’t a trait; it’s a function, a way of modeling the world in order to act within it. Once we stop insisting on the primacy of human thought, it becomes possible to recognize cognition in other materials and other rhythms: in a windswept terrain, in reactive code, in a flicker mistaken for an intentional gesture. Nature is algorithmic. And so are we.

I thought we were over this… (Dualisms)

“Natural” perception was never purely human. From lenses to sensors, our ways of seeing have always been technologically mediated. Today’s AI systems don’t sever us from the natural world; they expose how entangled we’ve always been. Rather than defending the boundary between synthetic and biological awareness, I explore what becomes possible when different forms of attention converge.

Published in
The AI Art Magazine, Number 2

Bugdayci, Priors, AI generation, 2025

Description

Consider the image forming in your mind as you encounter these words. It materialized before photons reached your retina, assembled from the vast archive of every sentence you've processed, every pattern you've internalized. Perception, it turns out, is largely an act of sophisticated remembering: memory animated by just enough present-moment data to feel immediate. Every frame that registers in your consciousness has already been filtered through layers of expectation, shaped by the ghost impressions of everything you've encountered before.

Priors makes this invisible architecture visible. Across sixteen chapters of 4K vertical moving imagery, the work constructs a field of perpetual becoming, where each moment emerges from the reconciliation of what was remembered and what might be. Built through custom generative processes that mirror the brain's own predictive machinery, these moving images exist in a state of constant anticipation, forever reaching toward forms that hover just beyond recognition.

This is perception laid bare: not the passive recording of reality, but its active invention. The system learns, predicts, and adjusts, much like consciousness itself, generating experience through the endless negotiation between prior knowledge and incoming sensation. What emerges is neither memory nor prediction, but the very space where both converge into the present moment of seeing.

Process

I became fascinated by the parallels between how our brains construct reality and how machine learning systems process information. We often think of perception as simply receiving what's "out there," but neuroscience shows us it's much more active: we're constantly predicting what we'll encounter based on everything we've experienced before. What really struck me was realizing that both human and artificial vision systems are fundamentally doing a similar thing: building models of the world and then using those models to interpret new information. We're all working with "priors," accumulated expectations that shape what we see before we even see it.

I wanted to make this invisible process visible. The generative system I built mirrors this cycle of prediction and revision, where each of the sixteen chapters develops its own visual language based on a different dataset. It's like watching sixteen different ways of seeing emerge from the same starting point. The vertical format creates a sense of falling or flowing through different states of perception, rather than the traditional horizontal narrative we're used to. I wanted viewers to feel like they were moving through layers of consciousness, experiencing how vision shifts between memory and anticipation.

Ultimately, I was curious about what it means to see in an age where machines are learning to see alongside us.
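
To make the idea of a "prior" concrete: in the Bayesian sense, a prior is the expectation a system holds before new evidence arrives, and perception can be modeled as fusing that expectation with incoming data in proportion to how reliable each is. The short Python sketch below illustrates this precision-weighted updating with made-up numbers; it is a conceptual aid, not code from the work itself.

    # A minimal sketch of Bayesian updating: a prior expectation is
    # revised by a new observation, each weighted by its reliability.
    # All values are illustrative.
    prior_mean, prior_var = 0.0, 4.0   # what the system expects to see
    obs_mean, obs_var = 3.0, 1.0       # what the senses actually report

    # Precision-weighted fusion: trust the new data in proportion to
    # how uncertain the prior is relative to the observation.
    k = prior_var / (prior_var + obs_var)
    posterior_mean = prior_mean + k * (obs_mean - prior_mean)
    posterior_var = (1.0 - k) * prior_var

    print(posterior_mean, posterior_var)  # 2.4 0.8

The posterior lands between expectation and observation, pulled toward whichever carries less uncertainty: one compact way of phrasing the negotiation between prior knowledge and incoming sensation that the work stages visually.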

Tools

Priors was created through a multi-layered generative workflow using ComfyUI. The process draws on foundational technologies from deep learning, computer vision, and generative modeling, integrating style-driven synthesis, structural conditioning, and temporal animation. I developed this approach to create a system where image, motion, and perception are genuinely co-authored by human input and machine inference, rather than simply human-directed. The first stage focuses on style and motion generation using latent diffusion models trained on personal visual material and open-source data. The second introduces multi-modal structural control through various guidance methods. The final stage creates localized transformations and emergent behaviors within the frame. Each stage builds on the previous one, allowing technical control and creative emergence to work in dialogue. The 16-minute sequence was composited in DaVinci Resolve.
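
As an illustration of what such a staged workflow can look like in code, here is a minimal sketch using the open-source diffusers library: a latent diffusion pass for style-driven synthesis, followed by a depth-based ControlNet pass for structural conditioning. The model names, the conditioning image, and all parameters are assumptions chosen for the example; this is not the artist's actual ComfyUI graph, and the temporal-animation and localized-transformation stages are omitted for brevity.

    import torch
    from diffusers import (ControlNetModel, StableDiffusionControlNetPipeline,
                           StableDiffusionPipeline)
    from diffusers.utils import load_image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    dtype = torch.float16 if device == "cuda" else torch.float32

    prompt = "flowing machine-generated landscape, soft light"  # illustrative

    # Stage 1: style-driven synthesis with a latent diffusion model.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=dtype).to(device)
    style_frame = pipe(prompt, num_inference_steps=30).images[0]
    style_frame.save("stage1_style.png")

    # Stage 2: structural conditioning. A depth map guides composition
    # through a ControlNet while the prompt continues to supply style.
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-depth", torch_dtype=dtype)
    pipe_cn = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet, torch_dtype=dtype).to(device)

    depth_map = load_image("depth_reference.png")  # hypothetical input image
    guided_frame = pipe_cn(prompt, image=depth_map,
                           num_inference_steps=30).images[0]
    guided_frame.save("stage2_structured.png")

In a node-based tool like ComfyUI, the same stages appear as a graph of model, conditioning, and sampler nodes rather than a script; the sequencing logic (style first, structure layered on top, local effects last) is the same.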
