Nicola Carpeggiani

Brain Rot

Project Introduction

Brain Rot is an audiovisual installation about social media overload by Nicola Carpeggiani and Alessandro Roberti (Studio Cliché), with sound design by CANYF.
Presented at Videocittà Expo 2025 (July 2025) and Moebius – La Rampa Prenestina (December 2025), the project explores cognitive saturation and the erosion of meaning caused by continuous exposure to fragmented digital content.

Through a real-time generative audiovisual system, Brain Rot constructs a destabilized media flow in which images, identities, and narratives dissolve into repetition and overstimulation, reflecting on the compulsive logic of contemporary digital feeds.

Technical Notes / Process

Concept & System

Brain Rot is a real-time generative installation reflecting on cognitive overload and the erosion of meaning caused by continuous exposure to fragmented media streams.
The project treats contemporary visual information as an unstable system in which images, identities, and narratives collapse into an indistinct flow, prioritizing stimulation over understanding.

Generative Logic

Social media footage is procedurally remixed in real time and visually glitched through non-linear transformations.
Faces are selectively removed using tracking and computer vision techniques (MediaPipe), preserving the action while erasing authorship and identity.
Additional object recognition processes generate unstable semantic labels that fragment and overload the visual narrative.
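The face-removal step described above can be sketched in miniature. This is a simplified stand-in, not the installation's actual code: in the real pipeline the bounding boxes come from a MediaPipe face detector, while here they are hard-coded so the masking logic itself is visible.

```python
def mask_regions(frame, boxes, fill=0):
    """Erase rectangular regions (x, y, w, h) from a frame.

    `frame` is a list of pixel rows; listed regions (e.g. detected
    faces) are blanked out while the rest of the image survives.
    """
    out = [row[:] for row in frame]  # copy so the input frame is untouched
    for x, y, w, h in boxes:
        for row in out[y:y + h]:
            row[x:x + w] = [fill] * len(row[x:x + w])
    return out

# A toy 4x6 "frame" of ones with one hypothetical face box at (1, 1), 2x2.
frame = [[1] * 6 for _ in range(4)]
masked = mask_regions(frame, [(1, 1, 2, 2)])
```

In the real system the same idea operates per video frame on the detector's output, so the action continues while identity is erased.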

A simulated “fake reel” environment is generated within the system, composed of AI-generated imagery, algorithmic text, and live camera input.
Video content and textual elements are randomly assigned to each post; their combinations can be reconfigured through a seed-based system, enabling controlled variability and non-repetitive behaviors.
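The seed-based assignment can be illustrated with a minimal sketch. The clip and caption names below are invented placeholders; the point is only that a single seed deterministically reproduces a whole feed layout, while a new seed yields a fresh, non-repeating combination.

```python
import random

# Hypothetical content pools; the installation draws from remixed social
# media footage, AI-generated imagery, and algorithmic text instead.
CLIPS = ["clip_a", "clip_b", "clip_c", "clip_d"]
CAPTIONS = ["text_1", "text_2", "text_3", "text_4"]

def build_feed(seed, n_posts=4):
    """Assign a clip/caption pair to each post from one seed.

    The same seed always rebuilds the same feed, so a configuration
    can be recalled exactly; changing the seed reshuffles everything.
    """
    rng = random.Random(seed)  # isolated RNG: one seed per feed
    clips = rng.sample(CLIPS, n_posts)
    captions = rng.sample(CAPTIONS, n_posts)
    return list(zip(clips, captions))

feed = build_feed(seed=42)
```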

A live video feed captured via an Azure Kinect camera is embedded within the fake reel, placing the audience inside the artificial content stream and reinforcing the tension between real presence and constructed media narratives.

Visual & Spatial Output

The installation is distributed across a multi-screen setup composed of CRT televisions and vertically oriented LCD displays.
A central media server provides three independent outputs, each routed through dedicated video wall controllers and subdivided into individual signals, one per display.
This layered distribution system generates a fragmented and non-uniform visual field that mirrors the conceptual theme of information overload.
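The geometry of this subdivision can be sketched as a viewport calculation. The resolutions and strip counts below are assumptions for illustration, not the installation's actual configuration: each server output is cut into equal vertical strips, one per display behind a video wall controller.

```python
def split_output(width, height, columns):
    """Subdivide one media-server output into equal vertical strips,
    returning (x, y, w, h) viewports, one per downstream display."""
    strip = width // columns
    return [(i * strip, 0, strip, height) for i in range(columns)]

# Hypothetical layout: three 1920x1080 outputs with different splits.
viewports = {out: split_output(1920, 1080, cols)
             for out, cols in [("out_1", 3), ("out_2", 4), ("out_3", 2)]}
```

Uneven strip counts per output are what make the resulting field fragmented and non-uniform rather than a regular grid.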

Pipeline & Control

TouchDesigner functions as the core real-time generative engine, managing video remixing, AI content integration, camera input, and system logic.

Computer vision processes based on MediaPipe are integrated within the pipeline for face removal and object recognition.

Visual output is routed into Resolume Arena for final playback, mapping, and distribution across the multi-screen setup.

Resolume Arena is controlled through a timeline-based structure driven by Ableton Live via MIDI, with each visual event triggered by a dedicated MIDI message.
Ableton Live also hosts the sound design and functions as the main temporal and control layer of the system.
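The event-to-message mapping can be sketched at the byte level. The event names and note numbers here are invented for illustration; the sketch only shows the standard MIDI note-on encoding (status byte `0x90` plus channel, then note and velocity) that a timeline cue would emit toward Resolume.

```python
# Hypothetical event-to-note map; in the installation each visual cue
# on the Ableton timeline is bound to its own MIDI note.
EVENT_NOTES = {"glitch_burst": 60, "face_wipe": 62, "live_feed": 64}

def note_on(event, channel=0, velocity=127):
    """Return the raw 3-byte MIDI note-on message for a visual event."""
    status = 0x90 | channel  # 0x90 = note-on; low nibble = channel 0-15
    return bytes([status, EVENT_NOTES[event], velocity])

msg = note_on("glitch_burst")
```

In practice the bytes would be sent through a MIDI port rather than built by hand; the encoding, however, is what ties each timeline event to one visual trigger.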

Audio–visual synchronization is handled via a dedicated audio interface and mixed digital-to-analog signal routing.

Tools

TouchDesigner · MediaPipe (computer vision & tracking) · Resolume Arena · Ableton Live · Adobe After Effects · AI image & text generation tools · Video wall controllers · CRT & LCD displays

Role

Concept development, system and spatial design, multi-screen mapping and layout, real-time generative visuals, interactive pipeline integration
