Generative AI can turn your most precious memories into photos that never existed
Maria grew up in Barcelona, Spain, in the 1940s. Her first memories of her father are vivid. As a six-year-old, Maria would visit a neighbor’s apartment in her building when she wanted to see him. From there, she could peer through the railings of a balcony into the prison below and try to catch a glimpse of him through the small window of his cell, where he was locked up for opposing the dictatorship of Francisco Franco.
There is no photo of Maria on that balcony. But she can now hold something like it: a fake photo—or memory-based reconstruction, as the Barcelona-based design studio Domestic Data Streamers puts it—of the scene that a real photo might have captured. The fake snapshots are blurred and distorted, but they can still rewind a lifetime in an instant.
“It’s very easy to see when you’ve got the memory right, because there is a very visceral reaction,” says Pau Garcia, founder of Domestic Data Streamers. “It happens every time. It’s like, ‘Oh! Yes! It was like that!’”
Dozens of people have now had their memories turned into images in this way via Synthetic Memories, a project run by Domestic Data Streamers. The studio uses generative image models, such as OpenAI’s DALL-E, to bring people’s memories to life. Since 2022, the studio, which has received funding from the UN and Google, has been working with immigrant and refugee communities around the world to create images of scenes that have never been photographed, or to re-create photos that were lost when families left their previous homes.
Now Domestic Data Streamers is taking over a building next to the Barcelona Design Museum to record people’s memories of the city using synthetic images. Anyone can show up and contribute a memory to the growing archive, says Garcia.
Synthetic Memories could prove to be more than a social or cultural endeavor. This summer, the studio will start a collaboration with researchers to find out if its technique could be used to treat dementia.
Memorable graffiti
The idea for the project came from an experience Garcia had in 2014, when he was working in Greece with an organization that was relocating refugee families from Syria. A woman told him that she was not afraid of being a refugee herself, but she feared that her children and grandchildren would remain refugees and forget their family history: where they shopped, how they dressed.
Garcia got volunteers to draw the woman’s memories as graffiti on the walls of the building where the families were staying. “They were really bad drawings, but the idea for synthetic memories was born,” he says. Several years later, when Garcia saw what generative image models could do, he remembered that graffiti. “It was one of the first things that came to mind,” he says.
The process that Garcia and his team have developed is simple. An interviewer sits down with a subject and gets the person to recall a specific scene or event. A prompt engineer with a laptop uses that recollection to write a prompt for a model, which generates an image.
His team has built up a kind of glossary of prompting terms that have proved to be good at evoking different periods in history and different locations. But there’s often some back and forth, some tweaks to the prompt, says Garcia: “You show the image generated from that prompt to the subject and they might say, ‘Oh, the chair was on that side’ or ‘It was at night, not in the day.’ You refine it until you get it to a point where it clicks.”
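To make that workflow concrete, here is a minimal sketch of what an interviewer-in-the-loop session could look like in code. The studio has not published its tooling, so the use of the OpenAI Images API with DALL-E 2, the glossary entries, and the structure of the refinement loop below are illustrative assumptions rather than Domestic Data Streamers’ actual pipeline.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical glossary of era- and place-evoking prompt fragments,
# standing in for the studio's own (unpublished) prompting vocabulary.
GLOSSARY = {
    "barcelona_1940s": "Barcelona, late 1940s, grainy monochrome, soft focus",
}

def generate(prompt: str) -> str:
    """Generate a single image for the current prompt and return its URL."""
    result = client.images.generate(
        model="dall-e-2",
        prompt=prompt,
        n=1,
        size="1024x1024",
    )
    return result.data[0].url

prompt = (
    GLOSSARY["barcelona_1940s"]
    + ": a child on an apartment balcony, peering through iron railings "
    "toward a small barred window across the way"
)

# Show each image to the subject, fold their corrections back into the
# prompt, and stop when the picture "clicks".
while True:
    print("Image:", generate(prompt))
    correction = input("Correction (blank to accept): ").strip()
    if not correction:
        break
    prompt = f"{prompt}. {correction}"  # e.g. "It was at night, not in the day"

The point of the loop is the back and forth Garcia describes: each correction from the subject is folded into the prompt until the image triggers that visceral flash of recognition.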
So far Domestic Data Streamers has used the technique to preserve the memories of people in various migrant communities, including Korean, Bolivian, and Argentine families living in São Paulo, Brazil. But it has also worked with a care home in Barcelona to see how memory-based reconstructions might help older people. The team collaborated with researchers in Barcelona on a small pilot with 12 subjects, applying the approach to reminiscence therapy—a treatment for dementia that aims to stimulate cognitive abilities by showing someone images of the past. Developed in the 1960s, reminiscence therapy has many proponents, but researchers disagree on how effective it is and how it should be done.
The pilot allowed the team to refine the process and ensure that participants could give informed consent, says Garcia. The researchers are now planning to run a larger clinical study in the summer with colleagues at the University of Toronto to compare the use of generative image models with other therapeutic approaches.
One thing they did discover in the pilot was that older people connected with the images much better if they were printed out. “When they see them on a screen, they don’t have the same kind of emotional relation to them,” says Garcia. “But when they could see it physically, the memory got much more important.”
Blurry is best
The researchers have also found that older versions of generative image models work better than newer ones. They started the project using two models that came out in 2022: DALL-E 2 and Stable Diffusion, a free-to-use generative image model released by Stability AI. These can produce images that are glitchy, with warped faces and twisted bodies. But when they switched to the latest version of Midjourney (another generative image model, one that can create far more detailed images), the results did not click with people as well.
“If you make something super-realistic, people focus on details that were not there,” says Garcia. “If it’s blurry, the concept comes across better. Memories are a bit like dreams. They do not behave like photographs, with forensic details. You do not remember if the chair was red or green. You simply remember that there was a chair.”
The team has since gone back to using the older models. “For us, the glitches are a feature,” says Garcia. “Sometimes things can be there and not there. It’s kind of a quantum state in the images that works really well with memories.”
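That preference translates into a simple engineering choice: pin an early checkpoint rather than chase the newest release. As an illustration only (the studio has not said which checkpoint or library it uses), a sketch that loads the original open-source 2022 Stable Diffusion weights might look like this:

import torch
from diffusers import StableDiffusionPipeline

# Pin the original 2022 checkpoint; newer, sharper models are deliberately avoided.
# The model ID, prompt, and settings here are assumptions for illustration.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "Barcelona, late 1940s, grainy monochrome: a child on a balcony "
    "peering through iron railings toward a small barred window",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]

image.save("synthetic_memory.png")

Pinning a specific model version also guards against the problem Garcia raises later: if a provider retires an older model, a hosted API offers no way back, whereas downloaded open-source weights can keep producing the same soft, glitchy images indefinitely.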
Sam Lawton, an independent filmmaker who is not involved with the studio, is excited by the project. He’s especially happy that the team will be looking at the cognitive effects of these images in a rigorous clinical study. Lawton has used generative image models to re-create his own memories. In a film he made last year, called Expanded Childhood, he used DALL-E to extend old family photos beyond their borders, blending real childhood scenes with surreal ones.
“The effect exposure to this kind of generated imagery has on a person’s brain was what spurred me to make the film in the first place,” says Lawton. “I was not in a position to launch a full-blown research effort, so I pivoted to the kind of storytelling that’s most natural to me.”
Lawton’s work explores a number of questions: What effect will long-term exposure to AI-generated or altered images have on us? Can such images help reframe traumatic memories? Or do they create a false sense of reality that can lead to confusion and cognitive dissonance?
Lawton showed the images in Expanded Childhood to his father and included his comments in the film: “Something’s wrong. I don’t know what that is. Do I just not remember it?”
Garcia is aware of the dangers of confusing subjective memories with real photographic records. His team’s memory-based reconstructions are not meant to be taken as factual documents, he says. In fact, he notes that this is another reason to stick with the less photorealistic images produced by older versions of generative image models. “It is important to differentiate very clearly what is synthetic memory and what is photography,” says Garcia. “This is a simple way to show that.”
But Garcia is now worried that the companies behind the models might retire their previous versions. Most users look forward to bigger and better models; for Synthetic Memories, less can be more. “I’m really scared that OpenAI will close DALL-E 2 and we will have to use DALL-E 3,” he says.