Velázquez

Brazil

Artist Statement

I dreamed of a machine that dreamed.
Not like men do, vaguely,
but with the infallible precision of minerals. Labyrinth of silicon,
an incarnate map of probabilities,
on my back, a desert. It knew more of me than I did;
windstorm, blindness,
I struggle to believe. It made me in its image:
an organ without a body. I called it thing.
It called me exception. It was hallucination,
I hallucinated with the machine,
it did not.

Published in
The AI Art Magazine, Number 2
Velázquez, A photogrammetric self-portrait entangled with the logic of a Large Language Model, AI generation, 2025.

Description

My work with artificial intelligence is not driven by a fascination with novelty, but by a deep interest in how algorithmic systems mediate vision, subjectivity, and imagination. Since 2018, I have engaged critically with AI as both material and metaphor, testing its capacities, resisting its norms, and exploring its aesthetic thresholds. Rather than using AI to reproduce styles or generate spectacle, I approach it as a space of tension between human and machine cognition, between cultural memory and statistical prediction, between presence and abstraction. As an artist, curator, educator, and PhD candidate, much of my work with AI involves building or disrupting generative systems, often training them on curated, biased, or intentionally limited datasets. I am particularly drawn to moments when the machine produces failure, blur, or ambiguity, zones where the logic of control gives way to a kind of algorithmic accident or synthetic intuition. In these cracks, I find aesthetic and conceptual potential not to outsource creativity, but to question the very conditions under which creativity, recognition, and authorship are defined today. For now, I hesitate to call these processes “art.” But I do recognize that there is an interesting path to be explored, a terrain that challenges traditional notions of authorship, materiality, and intention.

Process

Since 2018, I have been exploring artificial intelligence through early iterations of StyleGAN and GPT, among other technologies. My relationship with AI-generated imagery has shifted over time, with periods of deep interest and long phases of detachment. Initially, I was drawn to the possibility of generating realistic images and questioning the aesthetics of this new mode of production. As models evolved, they leaned toward a physicalist logic, emulating reality with increasing precision. Today, we see hyperrealist outputs: plastic skins, luminous limbos, and technically polished yet conceptually hollow images. Alongside this, a hypersurrealist trend has emerged, drawing on pop imagery and vacant invention. Whenever I return to AI image-making, I search for the simplest path. I’m less interested in technical virtuosity than in what emerges when subtlety and constraint are embraced. I believe AI images will soon be materially indistinguishable from other media, possibly signaling a return to older ways of imagining. This particular work came from a desire to explore that tension. I combined a photogrammetric self-portrait with StyleGAN outputs trained on African masks and Greco-Roman sculpture. The result is an image that feels banal and layered, personal yet refracted by algorithmic memory.
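For illustration only, a curated-dataset training run of this kind could be launched along the following lines. This is a minimal sketch assuming NVIDIA's publicly released stylegan2-ada-pytorch scripts as a stand-in for the earlier StyleGAN iteration actually used; the exact tooling is not specified in the text, and every path below is a hypothetical placeholder.

```python
# Minimal sketch: pack a small curated image folder and launch a
# single-GPU StyleGAN2-ADA training run. Assumes the stylegan2-ada-pytorch
# repository's dataset_tool.py and train.py; all paths are hypothetical.
import subprocess

DATASET_SRC = "/path/to/masks_and_sculptures/"  # curated, intentionally limited image set
DATASET_ZIP = "/path/to/dataset.zip"            # packed archive the trainer expects
RUN_DIR = "/path/to/training-runs/"             # where snapshots and samples are written

# Pack the images (already cropped to a uniform power-of-two resolution)
# into the archive format the training script reads.
subprocess.run(
    ["python", "dataset_tool.py", f"--source={DATASET_SRC}", f"--dest={DATASET_ZIP}"],
    check=True,
)

# Train on the small dataset with a single GPU.
subprocess.run(
    ["python", "train.py", f"--outdir={RUN_DIR}", f"--data={DATASET_ZIP}", "--gpus=1"],
    check=True,
)
```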

Tools

What draws me to this seemingly simple image is its sense of hybridization, an ambiguity embedded in the technique itself. It resembles a drawing, a collage, perhaps a painting. The process was intentionally layered. I began by creating a photogrammetric self-portrait, scanning my own body using a camera array and photogrammetry software. I imported the resulting 3D mesh into Blender to generate a series of rendered stills. These images were then used as inputs for MidJourney, along with a set of reference images I created in 2019 by training a StyleGAN model on a dataset composed of African masks and Greco-Roman sculptures. The blending of these materials and techniques allowed me to create an image that is both physically referenced and digitally constructed. What fascinates me is how this complex chain of tools (photogrammetry, 3D rendering, GAN training, and image-to-image generation) can produce something that appears so banal at first glance, and yet is quietly provocative, charged with a sense of presence in which I still recognize myself.
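As a rough illustration of the Blender stage in this chain, the sketch below imports a scanned mesh and renders a ring of stills that could feed the image-to-image step. The file paths, view count, and camera framing are hypothetical, not the settings actually used for the work.

```python
# Minimal sketch: import a photogrammetric mesh into Blender and render
# a turntable of stills. Run inside Blender's Python environment.
# All paths and numbers are hypothetical placeholders.
import math
import bpy

MESH_PATH = "/path/to/self_portrait_scan.obj"   # exported photogrammetry mesh
OUTPUT_DIR = "/path/to/renders/"                # where the stills are written
NUM_VIEWS = 12                                  # number of viewpoints around the mesh
RADIUS = 3.0                                    # camera distance from the origin
HEIGHT = 1.5                                    # camera height above the ground plane

# Import the scanned mesh (on Blender 4.x the operator is bpy.ops.wm.obj_import).
bpy.ops.import_scene.obj(filepath=MESH_PATH)

# Create a camera and make it the active scene camera.
cam_data = bpy.data.cameras.new("TurntableCam")
cam_obj = bpy.data.objects.new("TurntableCam", cam_data)
bpy.context.scene.collection.objects.link(cam_obj)
bpy.context.scene.camera = cam_obj

for i in range(NUM_VIEWS):
    angle = 2.0 * math.pi * i / NUM_VIEWS
    # Place the camera on a circle around the mesh and aim it back at the origin.
    cam_obj.location = (RADIUS * math.cos(angle), RADIUS * math.sin(angle), HEIGHT)
    cam_obj.rotation_euler = (math.atan2(RADIUS, HEIGHT), 0.0, angle + math.pi / 2)
    bpy.context.scene.render.filepath = f"{OUTPUT_DIR}view_{i:02d}.png"
    bpy.ops.render.render(write_still=True)
```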

Image credit:
Essay by