Future Nostalgia: Reimagining our Taiwan 2024 vacation with Flux Kontext
iPhone -> Kodak
I've been exploring a provocative question: can we transform our digital footprints—commonly known as “photos”—into curated, emotionally resonant memories? This isn't just about photo editing; it's about whether artificial intelligence can authentically recreate the feeling of remembrance itself. I have a few things coming up soon regarding this project, but in the meantime, here's a small experiment I ran using Flux Kontext.
Rediscovering Analog Magic
My partner recently got re-obsessed with his old camera collection, trying out different cameras at every occasion (including some analog film). Looking at our most recent memories, captured through these nostalgic lenses, made us feel nostalgic for something that happened only two days ago.
This sparked an experiment: Could AI simulate this experience of manufactured nostalgia? Are we there yet?
I’m a bad photographer. Am I destined for a life of miserable memories?
Image-to-image editing has evolved dramatically in recent months, particularly since OpenAI launched their 4o image generator. A few days ago, Black Forest Labs (BFL) released Flux Kontext, which many hail as the new “ChatGPT killer” for its abilities at image editing and prompt adherence.
A significant challenge for these models when working with real-life inputs has always been character consistency—maintaining the essence of a person while transforming their context. This is where we encounter the uncanny valley: that unsettling space where AI-generated faces are almost, but not quite, convincingly human, or, in the images below, me.
Experiment 1: Elevating the Everyday
I'm admittedly a mediocre photographer, so my partner has few flattering photos from our recent vacation, including this snapshot I captured on Green Island, Taiwan, with my Pixel 7:
Rather than wrestling with Lightroom presets (not my expertise), I decided to test whether AI could elevate this ordinary moment into something more cinematic.
“Convert this image to an editorial fashion photograph, shot in Fuji film 35 mm lens at golden hour. Preserve persons posture and facial characteristics.”
Using Flux Kontext, with the above mentioned prompt, I got these results (on my first try!):
I think that’s amazing. No LoRA training, no faceswap workflow. Just a prompt, and we’re able to achieve a style transfer with character consistency. These results would usually require a complex ComfyUI workflow with multiple nodes, and multiple generations to get things just right.
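For the curious, the whole workflow really is just “image in, prompt in, image out.” Here is a minimal sketch of what a call to a hosted Flux Kontext endpoint might look like; the helper function, parameter names, model slug, and image URL are my own illustrative assumptions, not the exact API I used—check your provider's documentation for the real input schema.

```python
def build_edit_request(image_url, prompt, guidance=2.5, seed=None):
    """Assemble the input payload for a single image-to-image edit.

    All field names here ("input_image", "prompt", "guidance", "seed")
    are assumed for illustration; hosted APIs differ.
    """
    payload = {
        "input_image": image_url,  # the original phone snapshot
        "prompt": prompt,          # the style-transfer instruction
        "guidance": guidance,      # how strongly to follow the prompt
    }
    if seed is not None:
        payload["seed"] = seed     # fix the seed for reproducible edits
    return payload

prompt = ("Convert this image to an editorial fashion photograph, "
          "shot in Fuji film 35 mm lens at golden hour. Preserve "
          "persons posture and facial characteristics.")
request = build_edit_request("https://example.com/green-island.jpg",
                             prompt, seed=42)

# With a hosted inference client this payload would then be submitted,
# e.g. (hypothetical model slug, left commented out):
# import replicate
# output = replicate.run("black-forest-labs/flux-kontext-pro", input=request)
```

The point is how little scaffolding is involved compared to a multi-node ComfyUI graph: one prompt, one reference image, no fine-tuning step.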
Fake Images, True Memories
But here's where it gets philosophically fascinating. These generated images are technically false—they document moments that never existed in exactly this way. Yet they might represent our memories more accurately than the original photographs ever could.
Consider this paradox: fake images can embody true memories. When I look at the AI-enhanced version of my partner on Green Island, I don't just see a better photograph; I see how the moment actually felt to me. The generated image, while technically false, becomes a more faithful representation of my internal experience. It's not documenting what the camera saw—it's documenting what I remember feeling.
This challenges our fundamental assumptions about photographic truth:
Could AI-generated "false" images actually be more honest representations of how we experienced those moments? A radical example of this is the “saved memories” project, which helps trans people modify childhood photographs to reflect their gender identity.
When memory is inherently subjective and emotionally filtered, why should we privilege mechanical accuracy over experiential truth? We’re already used to photoshopped images; isn’t GenAI-enhanced imagery simply the next step?
We're not creating false memories—we're finally creating images that match the memories we already have.
Experiment 2: The Album Cover Challenge
I pushed the boundaries further with a more complex task: creating a cohesive album cover collage, something I'd struggled with for months using various state-of-the-art techniques.

Flux Kontext is almost there: I got results that were far more consistent than ChatGPT’s (in all aspects—adherence, character consistency, composition), but the number of visual tasks it had to master made it hard to accomplish all of them in a single inference pass.
The technology is approaching a threshold where generated memories might become indistinguishable from captured ones.
I’m looking forward to future improvements in these areas:
Multi-image input.
Compatibility with LoRA training (that could be incredible). This may be possible with the Flux.1 Kontext [Dev] version, but let’s wait and see.
Improved prompt adherence for complex visual narratives (though multiple iterations could possibly do the trick as well).
A Taiwan That Never Was
Anyway, here are some photos we “took” in Taiwan, portraying moments in Taipei and a bicycle trip down the eastern coast of the island. Captured using a CineStill film Flux LoRA by adirik.
Other attempts with PuLID / ACE++ have yielded more convincing results, but required a significant amount of effort. An alternative approach could be training a LoRA for my character.